[Binary archive content removed: this file is a tar archive of Zuul CI job output, not text. The archive contains gzip-compressed log data that cannot be rendered as prose. Recoverable member listing from the tar header:

- var/home/core/zuul-output/
- var/home/core/zuul-output/logs/
- var/home/core/zuul-output/logs/kubelet.log.gz

To inspect the contents, extract the archive and decompress the log (e.g. `tar -xf <archive> && gunzip var/home/core/zuul-output/logs/kubelet.log.gz`).]
-J{SVɴRڕL&oKvQ1w.uQډ~<~^}''f-R+0p>qkyJ¨ Ïay-D">?NZy1r ׻ʫi Qu촳|M>F#֩'7OrNm,}㷴蛚ߞD𹵛o-R..JZyraݮ.a眞#:Wd|ՃTau s&Ή'ߜx{10|gk)Bb`X_vyd"C,c%# OޅzGM=NQdH?B%4ejA)ds(6l' \T [H抢-'q17H<]mʕ.7`aFeN$E$X$FQ6ݕ(ά>R-4 L XRA`zTa.$]2J0Ɨb_#/u$'cKԊ"/L]xI(*#dgUZ{i FgX0]ln%9 ,ӭTZ`6(8RH5LU(pxrVE Cp |B*̴Oyj#S=283 9ډwmmH2⭁yfz~`^&xՑ%$q[8:Y*~U,V#x%=l%=?P7\MSg.r$glTkphP<:0ԤX%I&Xy^eYZe8*I0Jj&z\NG4Hε-]9ۃ>gYn8t'`\ ܁}4)9D1T5nK.98bt5A(y %~ AZI9I2PyPZ*̊l+rἊVPi> S6zr:l sAd GbQl:YS LZ#LG S'46@E@zפru߻!(<$Jg 85$1%LEt d+c'$c97kd]W=)'5Oγt[G>@,k??(Q<&g&[Vʮ51]J߶͑Aw$z+{z9Z\\ލ X1JqUHڡCTe*9jQMl: 8 ~BAP~Q7PI"j^3Ed-w5hg9e\8scc I2& rPxZĜS)g*#4A١Zy$ hۖ uޢw: @Y!C8s=dqpfvnYCW㙹CS=vg:s^eU1HLmb (w[ F%S69k5I,iv- -ù$NB}j10T$B\9O%J|)E<ʤ.9ƽRWEDiSB"i  2$&CA PƝRDVo1rhfƫaWm#vp*uۊ -u^~h;>ԩNB&h5Yܑ'vVcg[znh =OL3cAxs! d&M\IF3"rG9c3c VrdR8ez9UHƱ`c)U1H*TMJkbpcj#J9., y  幍oײ!``>=zqoԼ9ؚ;eQ sζ<K=hsqd]ͼюhD"Z)I2d~3βRy NBP#$&DҥOpJ]cA?,qK6jPUIHeӈXxEF \#Zs,D%L{~`@<ڻQY[G|.VZ[8)!R NM򠀥,H.8!(Q{fŭs2p|JZ+iRZ  Fx]6Y72mL I;ĬJG VD-d-}߷9$xÛ^*d;W%4/*$TF ̊hX.sZ@V$ڣܥaG$KCJZ4)\ІHH Θ T+t;AV!Mm'Y6.r𻫁M~n\{[wRYZC C9C-`ɋrpa]Xq+m!Ց &b@_iH5PA%xq ,"&Tu@D墶 @8e-js7 IGBT!SR-!.x dbIrupo+{`lN*7rT4_a약Z_DO5tk_"Bj[tt$td8a/*C.Yyۿ7zؿTs&iܽR"ϠTsXQ 4kk^R?5/ކvW[b&=XYv[8FhߌlFhWg3?;~oM@ՇB{c]*m`F:_XtJmѣPRj"ȝ "tJT@A!T nV< 9 7e>yO6WKO%x`lnv `LgJ4eko`_D9 Tq[oqRq+*~w6y&/4>q  2XT<`9Q*#RZ$]4Qȃ5ڇ D8.DxS&ʌdXGhk1#+c? ܹ? 
wS"NȅYƺZ𡲷IX&Ą!jr %DCȈȑ.^â$(h@Mh,.tligQKXj#jk%NHR*ybKh/Ev ^ހqj9vsoަ\k+ZH%BA\KT ⴯\UZxcqb9+ğ*jZY(ZIhB 'mLQD(3@n[»w].#I 8U76]^}9[Wغn]MZo?fKuW[Vr+in7-o|C -7C6=y.4{z[!W;8 ,Zb͚aRYQ+#5n%sd sŲn%A>YZxext 2񄗗7u*tQRKNx}w" %t\i]iGNI˭[M D鿛"Bu+:zGN4>,.v`㕹LдIFBhb&a{4*e4aid`{a8Y杲PiaD@'-z)lMݦˍZʛBQp[qR7~wW.3HBY."(TX N1#L \#F*pWxLn >5Ƿ7{^&jT{ o,e#rm u{eo|}3c YvP,>l|D7;}mx777Ulf-xĴxHxMqCW4Tz]YF+7CB:Nұ8)`Yqш3 d>w};B:#^`rDa8܆N%?'u߽BvoOǯo|篔鯟Ͽ|]OsyF$c/ x2~CCq\.C.2Rr >rmv|o/_o쇿tG>>"ӟ|kNb> (׵O|P%41 CB4b&4YQth#-煻=W1[S ?{WȎ0Ce4@s\!HmR.Kz_?UmjV))Ɍ~Va<.?@{r@T=&Lk\P_iٿ./Wߥ|nΔM&"@m S(ŀ1R@iU Vn֑|zVu#[Y:PtL BX"$u)6?&88^:|f4|8_A2_-Rr}ՓVK0oux+mkox(O/xd>ǨLW4A58icl0&aaL0_^u`857ҋ㉖ݧ^O15tʶW4 fD !QVįR ![ RadLF_۾ <Re(.D382>h|Yπ2d4zi^ g׋˟+M]2z{ DKkw}b !rM7ɠɓ* jj^K Ebw*ʖE LY> ðN(ino򗮺%|J|A0>*GF_% 20QNML:(ykmS筫2# mRًwBRD}e_Ǽ`pI SB uC(] ZGD\"8&]0BMƤm¼sxՑ ȊRnouxbE;,~s &hpo7ce<8]oN/& (6MJ"Gj1!E0(6*C}q[!l-J%%D/QjR%clD6cz;o~B{zS +ltsxv͟ p<9>k=KlCfZeu.$S$(_B* $ ˋE/JluކaVҪMm]-dBv=CQ-H7͖Ybˋ@<݌:^+ԆAjvm_AxᗨSQ @kM/"1ZoMlF:pYTcPkI*HV2pZVU`dEbbcyl9a/U0DlFD4D$b^ as huRgTsLȒef + F'!)-+mT?piGivu&&U=g')}EVUwj}̍mO4&̊B< tqiW32V:`š @Wh((@3ĕ}b Guƒm~ro NT9~gW:R:R PԪZ\6LEZ핈].>v@u!:)Zke@;O|Tԍ *-:(TTTD Ze y9v+'1Ң&-%ZZ66t@ʢ]q!@-hIֈ#SQ1:uB͖:{$?H&g/uϞ}ӮfBƛ.h^k*X&鳰Ix"Bcc`lY":bꮮ/ .V>\]/m7k(ll-og1JӪԯ?X:뢌יTe !&DHVd.FI EٌOLF(6y%R"%LBD|9IV#3UIuql8崬sw玑ؔ\e?+@eԎg? 
FD:=\M#)$c2C>Ӡ9j"5+ƟI[1iO|P=={~_3ɇ ܙy~ amh}YbzX~wω[YI?}/-i_cdloov־|SA2jKO}?&:݊Z>g.́1<c>Ѭ]W_XS) CO< jqԣtӢ^tqm!*O=/c>)wLR |ty:bwa!<.,w7=t=pɥŽ!XƱa;M kU1w}<*e2#yd`=xebN[మhNZ5߹ZW5 ZʛCYkjK^_Vr+cw4w"Ob5@f?royfjQN\5U)MG?kk*tM,IF+sX0D`&~eq=/￞.˖>zLVׄH@5N|Q:'j-u:ɄdB C7 ¨Nb܉`qI8O˗k@ʻ[ސG"Ki۝]b<-u_ {A2%,аo_߮z;k;A"X~+g.'y ?vEL۳w_߿|חxZ*w_z UOiIa&;˝ >777hj֞Ӵ+_]%HNGy {iS[ f?v;Iyam:WMZ>̳ 4?rI<AR[~vntMnsc4q/bZgR~%>݁aSJQ?{6j@6`^c>y;X?~ݤԔM.Z #{U\Bu.dc EZUʤN#< =?%o9G-"K(GO:Y&|KUQdճwM[nBݫ-OǫOǫ[VԽ]ݧ/tV_g}=(y?@"~/9)_hɅ*Z8'v8y#q $9o'46ߺyNz7cH.C#b7圕ogyٻ7pvϵ!|;}m8yȂB?6˱ŻDls@tMߞJ??p_[`m9nWn*pvV Hkz${*JUzW椔Eg̵t{ϟcu=zU)WRjǒrYĈƙT"hJ9GR89\B'oG 엥I!I_2*m2䘄LlQdrs[ȇ|4vkYS$lۻbrx]wʼnoOQU˃%x1~!6xuuz@+ ]X2̶Ƨ<Υq3]0*3"KI -oOW$~G>ZAG> Wg#~,w1 ??-E-ZZ a0]ۛ"W5' KBGd8ȷ>39_=OBgC @fL隠yHsVMpl:̜^}^k+w2T<ᔟ)yQZ Gl c },oV/ +x*޻>1're[Y/hh@vy~Z4|;[;?wn`=PwgG=A?0[V{T* xvkscogهԣ۽v|ń??/. =QWN[&'.6hof/cd,M{2tF~\\n; m9:;g q~|9ztq؀BlRlK7x0t|W c_W#qi]]e˟aoآGSzKG]rAځpllٛE(G겝U¸bq4,2ECd+w;} 8Hv7=Hpi®W $|w;GoCaEZ+kFfmN"2YSgidMxa{݇nO#0`z [p7ܖ/\ޯ%LBYmT|v1*;-Eda4I Mf-9"GhCt.j281j.YRu"[Nmۦs4JF˝nZ ^ Ni+KA*/|)%ct $QR,FEeGk7 88h0KrTںjQV,@R8 H"D s5b*5ƪf`j I՚#IJD,ζ`-̍@G,C@mˡiTA4IeDPdC #0+Lpig1МkP5IV/B,+.FLnrzϩ|e$U25:DDyD 1RHhٔ,H).bm~5AY%V*sR1yN2cEj-\ 22yg`U^7J?e>LA r%HaELbϘ(( Ek05c@j~R{HE $;6L̀j@o]Q? ! c& }IacAtR%.]'T*dE2T U ))W !vkz m2 + EJErJO. Q"GޠO]Bg ıP(hQ0RS,d f9\:p@ أtnFD2a(^Gp `NLQƺ]0sqv 8֤U )r@KUڞIqILT@0>Z t#;9Y%kP|PmTno&A9%^HL)7VP0YXAIgP f@ \ɪ a4S"qFd98AN=3Q`J.IRH `~xx@SSVƍu{c'YFR`ebBu!jIbJID<@Eˉ1[`4rK4]KmwE#B@({b wL0@5VGA,n\,N@ɹDp%) } 3\8|_9BͫG vmŕq8݄ҷ}j o ~HJ zh%QZ}@T9JJJJJJJJJJJJJJJJJJJJJJJz@9=$%ꇣrN`@9+F%P0|Wu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%W Bx-L`Ɗ @A*dWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%PWu%׫R@-ǘޓq$W佌!  
`b}J)ᐖW=/q(RJ=,L (%2L F%]&P eu@]&P eu@]&P eu@]&P eu@]&P eu@]&P eu@]&P eu@]&P eu@]&P eu@]&P eu@]&P eu@]&P d Fxoh${~P4">,Ww]>/{n4N¸zYK1{9K./&q e u%$.o=zjANK"&4\-K:DY:d=/IB3Poa=8=C~z;7M˦yzggX´\{wDy 2&O|޳iPLѧƣ$ڇ@q6' h-ֿg`cnz!(4B%!OUpՅ3tͲdi.C3h]?ۻt:oG39,:*lݽm{E;z^j&і}tT8HBUtM Ҹ^aQ7U}u z5Mw7w m6۱'P$L0"\-T3h<?Y_`N0'χ*[-Kx;UJ&BOwѾv'y/BE~c&ѼL 4?PRg(9MuPZw=RY_NR';T{7Uo6,,ziNKdebmMq)mAfwL/AM.q^~}SAŶRӶ[-|+m=<_<QخE5_čOW|k o]v{ jk]P5ݤ?SPiN+-Ka}7?R futQMK7IwN i'0wpt[AZ~wk3DɛU eRÏ ~w\utLQ##5˺:}f≉bh. Uz!v(Djjs`};-I6x\.Aqcrk.=,.fBʨ1_YtW~`,LXxu`*|}Ojc<ƓNJ6̪Aꐫ)myg1WE%TI<qtu{NҒN@tCjj"xs|6ZTޝ8x[razT"Kmu@Ki(Hi-\GO) :8gT1ggDlo&=nԙK);d2mir2UWup^wjϹVPzS껥N׿ȸUne_Nc Tb,f(1xf2fm.~CCD䏞YTD-AjH jĀ$RQNUOG2ޡN[ڸrE5pvYcHC>CNk|Ϫ:Skes[;hԡ{c=vᒈ֘ \^J-? J_`An^`7ivyf$c!jȈR4cK2Y; 1x* N<'̂#!eq&,j*Oa"1t/3 1jU r 8M(rneqWKkFw9lq%Eb@G4o -u.tL8$>@F+g26 R{&KM!A$ApUEq` v9w@I/NF׽T^ ]NChy 4mqA@U[k@rA=å)V,:uv?7S,`3,:$]ʈAuc7e.I҈X)ҞQe)Z(&&VQNk=cF@1gz4‚S0Ͱhj0RwH6#Wp0M]7 |(Nzշa ﮗkro~~n a 6?ŸBh{!tiÃC蒎oIOP"ye0֧Ir4g+[ӘRPF*ղ- ;R_2,PjG˭}_jkA#J  S1!er%RܹN.!.!8K<a"hoxR#i#Vg1 1ţjoyb %>Twlk.^YȎl Ɗ8* KNby_vR0+,7OIm0#$Ap䥵884+(ΉU vh5%4 62( F刊p')5pzg@8xb+flWtX 3۷,kK 6Ǭ Vzt|o4U`8e;NՑJZCZYEiylCp=)4gAZ4' Y`;$8Dip* jspyHXV3TxELIXtv̂Ϊ,<ľKGg-ϭ<#қ+ċIlבGkƨvS!܊u]a[Vz43.w1Y,0-)uJ[,$;mM BE$+-5):!#=4* EΕ߇ZI @?𥫅HjL0n$'}+5Cj!>K>{˥~R"П/q< E}o͘L<,H)ENR J1u'&õܞS`'aYW  ̊a !-,I6sp-,H]T) pZb~:?gzpSxN?Z%XoO)ggJrF (yp::q;!7MMOm& U&VT͓>_'?^ϦW]`@X΁68_^kKbmEjßg" ;oBM#18Gn4#i0чQi$ B1ɕG%h$Fm+cXy:59K]r_<FI>p[:돯oοMſ~|7o/0Q?xk:,AG}8yY[CxT]mκZ%)7{|\ra\DϳO~RۺRYqR~R_I"ep?﹪נ k_:&tW5lclVb#buNe{):M̰>yoO8Mc_Ef KC{Nju~bFeu<:}Qk"ʙ:Ji1K3HDIdQbTg=wYA -Ş0<@*Fch|e0KKiP_|xuTu[[Ém%-m% @{QG`=km)j[=Ozm7CXRK2LO^6L2J͘",6t dY%j"^BJv){l7grJջ¶ӴIbR,0mK*^V:p'8|rT z!T& #fZI d2N:Pr8h~t−Q$Gk)Bb`Xd3F u܍]f |{pU)e%WI_rHsG)w0y2uX3&3HҌjD +)S!=(*?>"v)Ø&0A&"eXib^gzʕ_1iO¥HOݍ ܇e"%d0}ZlyNm6ı%sWZL,d$JauS/,Ⱦ _.jg4փ!r6GxV"b)'x/YopL;rR+҃SC9{2D;W z]R`h!pLMzINq!3.K~I0!9R[8fJ)=1ĶJlv.[*rU-)zu]h#P[aL$HL2 41ڻo$ z}Ԝ's02Ԭ }-C,? 
qOP㫯v }LcK:͙0]*}B屚c7Gk!^[ .Im<ȑ%匳LxC0LȤAFE !F2PriP6#&J p$'7+H1Hpj7&zO8W_FWqHPa}"_|AI}I= cU%Q[)EjMZ2)l[Eyg1e8dX݀'f̠9Ac +9eD]@N%+f缍 5#g[Hܫ2;waYKFNCW0Jb4,RD]ߏ+̵D|33d"swڸƵg߸τ)3hUBі&!aFhKBvNɤNDx'œJq!rAZ"JyPU,p-",7Om̵nYv# q QWNi̺Zo5+ׄв~̦|J><V*N1?u.NYuM6FX'OAr瘍] йxTg[8{ebsWm^Mћ&ю FenM{}fl%ztXKI,\JRyo9@⢴&s#w;+U^/ea+v~IV/I |OoW8j(EMU)h]*ɛ9F2c^``u %`ƊҪkRlCR%}x.&'HDPJ1 Ce[8U4]ݐP"&wΥ$xucIE U$maΒgo^ԝx_MZ:53`4_Kb^x>N">&e^BgQGW)4[L D ]:+{P9c\Pѥ93)j\R8>XRNVƹ`lXxP,|\KE.9l\86 h͟\7ڻ=~5##6Jo)diњ}x`H6%O؈"X\$˸cEc<2(3m(tRؔ ^C4FchRf`sZGl?\P2jCC`WFh&ψ~`I}ihțV&0:j YdEWDB&fȁ "&H&JOa<8W'F<ס b+"ʈ"6D\wJ` m%$w- :0PHg UՑ3 ٌ:)~2> $4i\Ȓ`3 Sz-q6H{z^X|Vɮ+np!kw(C%=u.`|bu,}ηAW ;Qx. jPbZ&q EZ^#Himkʅ/] \i U;+c0qApUVWE\s1pUgW$b ^\ .^WK"-s" !\YQ>[p-Jv nz_+~R'&p0t7{p?cW$KaZ\wnڜ-\ L[K"9{.R.l0}0>z$~d=_Ʒ .c~ΐ uy?ЍLr$׫A?P&&+Z~bE؋9BfYv~}~aJãGO3%q_'sY4+0_@#r:",5Y'8;bJncf'k'[OY%eKr+o,T$P+NN XsvbRĶv*8 )Nz Hᮏ[i W$:uVSŝl%%Wej\:ѽ OE WHAXh&~Z'(G#ąKzx:F=OFxfL[Te CIfg-^`TYIÿ0*'j? ߦPT^ō-g-ism6Ll}$`R>Vn#).(m}G$Wr6MmSk6MmSkvpˤ-(Tl!)mj[ۦֶmjmZۦֶmjso2mjN<;'IqRk'IqRkԊIqRk'5,lZ85NjmZ85Nj'IqRkpqqچsjcW<4솇Uks\<ҠZ(|pH#y qt{_|iP-==ms6Y;#/&0{q%̐1k2Q?I%*3>X0*-T77KN'sv:fL=ۅ7+rhAE"z2'} 1_ȅ-ͱhtG'_Fwѫaj3wWɄ4sӮ/f~~Yť f^z= ]>\f~5QMfA$:CUbҜ_|ק114 ϖx}b5490%^P=DXT./q)IZ!)ha=D`F*^ݕ8/ L}"k;Q. 7ͱNM%Tu*^B ~1ѭw#hD^X>D 5 , 7/s}%/Cӥ| ^FAba@eHYr1,\X-P&йlg뢓˧wx7gFax{7[Et,E;q>,}:RBόUVf˚49K!F0)ci2(,)Z9AZ\A*AwAЛ7y )ߏ 5egX:\|տ_r=n2jzy>z9M3_̦M1O,릡K)ő1tZ~1-Yқ^5j-6mGm.dݶ z91 Qk(wN(ސn0#6 rW JI1#Z7a},q{:r(끯t3{}Ϝ  6'y>'dtQX:FuO8&+Ưs5V Xv6ƎNĎ-ώne:ף%F9VwQvcz(>HcS"}4J$Ra9)FlRl)ܥ.]*G-t .i0b R28˓6BL9sYF˸VZ42ǭ=9Ҽv Ps&]=͸'(UoPvx&4l_iOirV&O>ܲG?}6?_KߊIrm!z<\x˽P#dI'^~_ӯJ1ݓ+&>,酣RYfmѹߨމZ{>pC:{tv={}V@' N;@BsBNzEamk[e3yCtG&p 4Xg{yukrNU`%pN,օwh2cdZeh;D3}m~3uuQfPƄ\Ćh!ҊpmDҊE=O1 d) ĕОW`# u% U'33FtT9^_Yo@Kq$կVW~U]mivE,OcY*՝'Swԝ/"?EJ;;~9](" ?>Gg%JDSR gx⼝B쪲} >u'mFY@brO61Χ2F[xV ``GLAzd姐`VemmdYU}} mHMp̀?x&@=,pЯep/$R4؍$pꪧԚą`P! az I$-Ѳan- g $PJ#?&ֵ8o[[ܲ3^9ٮ~krEwF⒛d؂h NZpwI(MҒL4VEΑ砓s*asIKNTap6.Uy<͠bnPna6e @Iē#CH dVw_{0 6ruOI&nI.) 
”^ֲ,rЉe38 hY|1 ?Դ!y#\R",5@sKF ƣbLz^Gσ|)ΐ5m8?{W~ Ro-g\|?gAZ(++#%#r&yJ9&$,e]\N*0>q!`ƽOrS]/z]<V ?%]?.,ㆂK6 ; Csq:^jvl(feBu;f:0mjq%r7^ЀhQNOmb=zat;q#_}o޼\i@gH޾6:n9GtNӟg4WN.T9?<*BNRxw69} n q}x4[c("U~6Vp14|ʵwBNn#F,+2ĘGQ,3h8䶇v]\wuϊ5.:.in#ƒoP.}6&{e5tn"czpz'Gۇ?ûw?paw4F`B) _+||Kܸǭ[Z7Գྂ5k{|@_:Zg_ q(sz_Κ *z{qx6T&4^*@8 1!ս(rqktMܯł?:f+p㬼 %`P/u c?dq2fedƀ ڊ&O$?r~A!ʘxH@bL6ksAjHMP3]PّjS IIQx0R A1C9qnWPYԁC7*^\jjc:%h UlY/ ~{赇.u+0`W<塂 >D! q?SJzu&hEWjYBgUe]Uav0z+^ΓK4\0hĽ&֨w@3pџi&+|7'i=<{iunοϦ\lBѩB!֤J{%s.wꑸT˹dTT+w FA;БBGPb&")^&9a+5rgi9ts~`7ɪ̽3ʡSV pZ1q6ຟuQE\A.}-&L`۶|~/pr7O.oO^0S b:Lx-Wq:1/Ū_L2 ocS`*4T`mP9"O^=Sof o{\[xC IxPq'fu6Xڣ#B2I:}S D,2z]vݰ.*-hυY]m䁆3Jh cM35%ݸs^xq2EK; q$F$lަl\KR@=&f>ڙ]Ds7!Hg)NɃSPɒe] tr+T0v)x_:rv^/b3#BVV&ĢJdv.[2r T=[U[;_h4fGNoV`I u@h :!:06QjTNӴC~`k/:QY \ ҁϔT +?~7 LY"$+Ô09<W)΄ߒ[6)MM*H?k }ȒtY&!h!2*@^`R& *H/mJ]-aZE(A(#9\bDv]hzGĹXO<kkհTOn=2'sտy'?zX|(CͲ& g Y@xEJjujR]eHR6b31{:yZO#Sm{^]oy |x w'<'L'HeǓy5gZ쒻*Ov]q-wEfK:۹ G"2;㮊튻"i9mwWEJ:wݕz.#C誈kŮ+sh*R KtWZȥTRmyԻam'h-:ۢeQ&N](m}QS>9.Qyr;2S7RƓQO˾y<yNBayUls6(΁1µXfgI߂6*f"Ā!$, {"uخ~tz9*?K_qz]7-*͏Gi ӑOLWw~ WsbR~sF^1<_,ߪ*zn ˓AsG q=>yop/roPK+}u\F[> 4`uTqHr>AI)sF0_l|zUjÜ4`˛}MТha=^,dFKi/79J@@n4iRr ]RdYJ6#WKS 5Ƅ,fN6&ICKAԚ2PU֘:w?[@텖aSy|ty &2g (ܤC/u֡N"i[:J()°jS9#&.2!GN8 Ja٘8C$'};?OmWKef(L ΤPH^1Rd|mZ Ь&d f, ېd6@K^~ &'#!$C^kfBÆژ8wj:,)RϾ?tu3sm~պǓɕjiNTrc^?Yp^Y[D[s'J%窽Pv+]mepz(!x6gT]^8AdM䤽kNMgF: ד F꘭ѥhrFST\\䤚5FcX1_ؘfB .$*69+xM`]xwb&P`t>'߹6# D T>QfE;HX7٥iJ` 6A+Zب3v,&*YЦ饎Ĺc0.mڍiǶ^yަ^h`2<#J6:㵷.%+oVYHUTkp)ƋQ.xC"dB)(d1D&( EE(Nڹac~Pc[S#"C([ ^خiѶH)bGrѪ)6z5ڛlWhmh]xC1K^ӅA5SB]oGW#a8$\ í!bL2IV߯zIqHJJ,ٜfwuOUף[çJn|5d,MfgXLӏV@/þ"aR/1",DDꥦ0+Cʘ4RW%PwI3dS7\v/ N;ivZm5P7Tfa/r fo ೀ |#8SvN ,uJn<i 0ТY.B.+;K̂sų!W',XY(X(6eeһj>NRۈJlBô 'mdit xPN]]vxm(ЍkWO )mIuo~o%.nX)pٞIhuz{l4rʹeC#bufp$5YmBb5 yg&M)WcZ$84Wjq3`ŭy_SiAY'"NuWk.QItY\,'cEMI/@.+R[`KCy ,F&r!u T׻\ﰗz]o>./NR_q;I@5~:+B+B*t3: zAj.V>xrAɕe.p%og}1X,Qd eJ !Sp%|UNƺ*|,^97A0ɵ r B($<3@cĜE NVF봆30L96{,?m(|,K#X6,rwWa^}n~?:unr=hiz$é; խt `O+Afz^.=0q G5le[ nᳩHfunHjwZܛp>:H2Q`"Ԕ ,ul}\IQ$GF0^xC ) [+@Z_A<29y"ǣ g㍑ehc{ԺnO4/쬁/%}g?{ W xn0"p$F 
IKkqpi>WXaqRy^]B~&PfTetۨQ蓒5 Tѡ+,Fv ^kg=9dzؑp쳕gp}ۯȞ#yD1vi]i)8m#\) ,ҖJ8Zcv[y=0`WY  OZL0xI)(Gag EV%fBP`ڝ237UX-T)Hc&y+cx [!-=۝ԅy^cn_aK&U}&Q[n]W+ȍՀ;$Ch]wjz d_RH˺u@a;t ku>dڞZnu><v_Ϧ-7uh N6l6't% HϾp\E?pJ&\r1k}158#BC9L09YoYskyEֆ+VjTi8h%-݁}99i> uܻ2|^~>莅aen#[;EDm.ZМiҜ0da!!JVaYpyH^#HXG"\JlB Oc@veZb8z|{#gLj((D8I \ V~ ǻEM!BV>!u%(yd8C ZKJ N[PQJ OעA`2o90I`^J7s+j{<޲TxG$ *%1fEw``:`@V 'UqaY%X1=;_,nPЊS.۩ njP|jt096*aM4uQ7O"*\kMbhB1`gR} "`ϗF33/03Fbs$׏t5 Fa5qG UL|<^ =sp>mԝyv]v͕1,|i:w 9?twq1foc) S<{`T?. ~z}?L?_{ۗO1Q/{qoA|+0 .]$ߛ_Es{n /~Ьavh.C>g,;ƽ)>T^ʡn7ϗSZh:_kMJ b~^ k U\!}W[n@W\e:8^16H=VmdN ۩:Ma(7O:]c_Gf KC{NjMg~bg?vxt@)E33tjcL#g:{KIdQbTv<:e%b -Ş0<@*Fchz6k%Z4Fe(V¯ޡ·Pw<:B% ;.Q'9!te)1t C'=T t^QކYkGIQI)%3ZXP&M̷:2#Ϳ?8qoI2; @@~8ア]5rSfK,- ~]ծWIn)?~rTv!T) %#ZPJ d2lW -C8Z?8 q@h 25G!1RiADzT`r#EX`[)F~RNq,c*h%{gqoR0#4'(xAca١]/?Z+,\x,'І|[!|_|sxpـE#&۫d*A5Α̖̑̑̑,--8&92#92#92#92#92#i L+-ceі12F[h@8-_m-hm-ceі12F[hm-ceі12F[v2F[hm-ceo9J2V`r)s~>UGG'Д+JrQLoq\ Y:ob2}N;kH)EԒ)E@Մ^vMÆ7R+s;c0V$}e1Sw{1']}iwزcn6tn/lo卷$uUoGy퍾UdLitkSwMgxtrͣpD}OiP} wĉ\?qٛXLK9lrV"Z/G`*M?ۏ\݋`pL/_rG.xԶ0R/EE@(K68 8@n??svSA$uC grH: m.TmDx?"ϗm-<5UO]Y*驨;3 Fj*e;𐫋 q`9-RLp7ܳ96b< _~CgZfw%mH0|zL,Lu݇Dt`hSe&Oqp*ʲL`!×@"&:$)WQQr)xfMVF@L.^:㱔$M%@]CR!Dtڥ}I>,20w 2sd>p](F i2|$e+Rt#[pa5|]knc#P[:EIl Z*pBpda:qm"jPr1c~wP{[^u7Ӌ^%u |k^)խ.JM㫐C8<vWn]{QfwP_ovm뭽QP*-Y} JV L+iM7nDRJ9=U;sb8sbyD(Y"efy%QLF@@0B3-%b";dR%ZR/lB']֥2Y[ `{n1! 5Zi&sjl߬#n  .{X&M jzPOD^y[vHo%<$R:ż2g5.X4aofAF} qƦ7иxDǂe:ѱN{D%M1XyRGMLOJL($@޸ Y2l:%, +6KrEau2GN8+E2&ΎG!h#F_|IF^G13n<: RB!H^1g|kZ iɡ&`ڊR=+lCl$$LOFB"4<5r[̄ʆZM톚.]fL]qXVӓgf]e%5>t=YBzo{I' .mB\}E1q1&D a>bE3Ԃ[#@ڻdX{fQ&fAO &1[/K Ns j#c5qvPTjq,X(;,+.%*c~)xMٛm0L?Y dHd}x`,I*O@(ш`"YS$V (E^6AWQg혱YM`U)MZlGl?XPv<2jCXrVhɢe&yFtG8_jLd{E*+hrM.xUÌJ "r /:D.%0AqL'sXTdꤝَSH~TX~<2"b"n;D89&V^ y:+!ȴL%8+ɁpƔ&Q%K4Qg|@RI%ytCD`*#b5qvȠwh=⤙1uVӒ⢪7"b8.sHv3W1JZLË́e>Lcj\<ԕP?an5>vܺx(6:(hD?#vgy ֠Aָ,i6>x\tQ]IÖ*c,&m)%w1p)1$h>euHѐ Lvl/e1/4԰-n^˩i8` ;UGcD<\;o&{R8ք=v@GJި- -A`chms? 
ls$l& ,'d:66٭UkWuH%IAbR(# L&*Vg5q[< x4u}tb)fAn#@aZ =6W_sȑyվ1*&9l0Jsuc9瘕#^9INV N^Rf1IKYsP&abnI.\ Nk~7_gg!evKUx]`rhQ͵+DhsץfrFbП [Hjݎnf7wOs8fOff!l"nzzm1=ܬ[͛:cEwʒxbe*M/Us1E]+Z?mr:@ֿ>X9vͿTc1V]okbΨ mZ0o0h_\.6=+{a^J) %1p "8f6pod3@x .B{As5!H'`s2pEjZoD *WJyR0UkRA\BF@2N):Av!|w%j׈ng >s~Ẅ́|aSxtQ[s5ؾ^s9 ެYПuI0_t%ŃZl1G"%:AzIFa=#8Axz;z:ykȣDƌqDp,@fKYgY@EC\=0I;䩂<0(zqJ|AqVO}ˀ X$ ˜sſH.>y^bdNU AuV_ow;_cwݕ  mIg\D.d%Tċ=ޘq1iOȜQڕ*%,9`o^0L@F\ϻ`4VNa!Qm_c"p;"o8Yn)76_F8%|\)xu`=t( ,<\xTC3jwQNfj2.`ɺS Б"D [,qynG<15Ye :RXbH(ʻ"hN6E!O'lWo\^w<|?=3 uiMLv"NK&7`&mRR*%,*W40f_i-xn}SQ 24UGU'|랣T.u=]:.#lyr4 4j4,8_{f tHu/֚5EvVfv\&&4 5}d xg4F%üvT }I#؛ԭI˛YY,WMBRe?̕dwQ\xj`YCIRoE˟?ދC+~t:N_MƖ[`h}?%?rv$RfbM}Mmmn8\o7ėSb_>4^>l^oW/Dmnm%Sﱔ(d6Q^1Y"OP"q-;ZCEZ!PRʮ;5D̹8!"5'W$A<̑IvpJժx+?$_˹*7Ic崙h6d 1#[{GYHOs@3 67B}@}dpjagnFvJnW|llƸpYܚ[` 'G=XWbtR+JŞ/D|VFZeoя-0B--',h~&=GG}u컋!Q~/}>7 ͷhEhJy``r\B|CdF:a(Om~M>_ B |?}^5RJrUwܔc(T_4nfk3,r9;>POذrkWٟ(8L9'J 8;9mٟ3.?\qTR+="iՁvD(\ChN]'WE\WEZ][oc7+B^&y86/[bu(S,~UB՗#$Z?ǫޓx|1 A6dVr2jv;`J܂{C\x3:xVC?% =2O \ťye= R'WB)iP%8`b5(*:jS+MRiQ/^33_XHꬢg1')9d {kH%5&fʢr=GJut:^\4xF.8 (ndE2:0DρD ~iH_ *Hj@m ;pIy6 +.4zpVF3FyUIRaSdY)_Dˤ)e@4[Hg2NS͐hY $RA),\ fJ(h!CW>θ۸g'gahhśYb] QU9kNiߵk@1Lr6BDeLG$IM2Zpk%F ź@QWN4UZ_~PcRާ8Q7i{\:bjs M>rW?}7>7䀅6=渧*gdAEkZaaNV _]M*3yԦZM}ޱB)lue[4nmzEZP)Y{cx*` l&B-6U +jl+qg/pWx;tV\W+i,=>j򍯠964iyXdlA^s5^s1 c)l Jd`(!G.1|F7"y \pVd)A+K7ITvƜ7X2dQo" |/Udhhu Aڮ7c>{v[Q80'ښo&p=R !^6.M30YkI\3I2'fA %& H,1 2z5r/Z/ݑ>-Оe=$r OC p\W=ty8g#ꇭ+ ysXG4kgf! s?*a@ΨM>b7,\32bœy.\#Ǘ͏<<9^l7N ,sQfe0. id!)Q`d.] @xR8 Nؽ5+*\P|gbݎ~Rλ4(eP$ Wh$[pΐR\RR*%HЫ" |L3_gUi_7Rf2)ot~E}~ѦKn\=\E\x .)J%q|O~J S|Gݓ.F$>;+V9 D 6d 2ws3IÎv$SaNSؤQlLdӛ]=SVGNT>fKuJ$vD LJXsBbPUU.A\DVUδRYB#=!X`hဈpst,Be^-0B֨{\(VZ2tn?. 
!d)6w=7?t=5w]ne Dm"F@ 7΃󘄶&Dt|9dA|דRmIӝYޑF041J VuNT'6BR`@oVǖqO(Mom?% "(TLb.YJ ~&Mȴ*B4Y{R= OnG: XddPRnj%l ӄ^7`>ǔln]#Pj{3, { ##H h3|ேgqjjFF\1IoC)Ddu)_~?~?>{:uB+pB)Iz6 | <4|hiaMY|q9qoirn )|Qp>)N&iAȦm~>7 ¬/Q,s c+B,a݀@]r;]cՑ9g]lN5vZqF[?:-d SY+U0N|iO_VTQ$%ޣ P*ecC ʀ#E@@\ɀ"C)J݋[Gih!9p/1KhEoA1rsit^Ϣ \lQr#s%TK;zabG\E&]V`UtV8ē-0LN3J|uOщWk0.FJ@173a >*swqfO?F]Vl5ZQڂh]&ˑgVD\FL) %wB<\U(Elrr,2ާٸqpsF+jW:bj`۴ȭ.]~n~}oFоfϰt= yUv첲3|6}ojA"öGZ(#V5Sqn67G[Wi0`mMj@8gSc<FU{*RH:"%냨~, N9$ 0tx} Z* S2Ѿ$T\!DtSD"v.;YtH iwmqJ~ݴMd@N`wAPيe2#˱oq.؞Yjif$,wszYUUN-[-B$7U9bɪUKVd6k12S;վV$>wy&l4N֗;d XkXT!h/S]ױ=ұiYdwo~J u`P(.ÁKb}ch1AыЙEv,3/?|!?Ugʣ8CYO'Au?Xf^ZE|XU6Du.sɜ2"Y#q#z6_osѲ?,kr`U'"" ϱQ}KCCo֏]EX<* M]%2<dž%]_qu|xs|ՋZ=:ϜZ_9ZsTd%«G2>j|)trqܪ×ÿ_}Zkb8EU)G6C4 V=>׶`r/Žl<J.+!q&9X*=uJ^ .#Du"q &N'}-Gj| W'Yl& pQiRQ fN&iYeUbmʝd]:OAw[<^/!G෭7߰o{8f7E7ns?֡CĄ줎Z-PF!xC"%~(~l}/|EL 2g?XU暥T;2 tl{'݉={sq>T3*R(?qȶ#))v`z=OK7{o63 B*l̡$<[KevQ*F@ _C3j0r2KVv[ZM:Q?WvMa7AB?\K)/ mYgq{a[dZ_US4>#:G8ӯ V xXdmƄ#X$8ՁUC+hi:[VCOGxV' {auVeRrVh㳫kKV)Gj0nG_ lR`4#jsF,NUpnds4qn ݓ@ ptW VasLAIhuJJI'j8RO5؏ ![Vraa FQ"m+J#+hlW%wS1K~ǧ+/dPaqʼnJI. 6;~d5xSil[$e?z5P &pBN\kR-a 91\yoWM\gD h:\=C"O+A9j:w(pդ {W"V•^;w|w(Nꅉkk^+soJ;tri_lym/ATwo AL7q`&-ܤi}W_CL|ΏO測hƧ9N &>mi~)jq"ǣio_,޾n9QZnopcjZ'c~8;>;- W|e]N':Fl)W/?x/>y: @oI: @'$tNI: c%LP5웯6rC|Ij@]y``BHǸCv݂=UQ-=R׭ݿ *im2  (r6B"akjK,ƒ#x.Ύ}h0$&HYqƆ~2Y.hܢƒUlrr:Q.;Gll"J5R GD:N8.eWWBF2+d,|P٠q)clZ&hAT c4qLs5e4Vc#N5EqF"&_jΥŦ\R1ifVBS>LW3r(ISeY} |!KV+s P2WV/)W1 cc2Fl b ѡE = SJR¸jsT2)5'Vȥ#&OP%r4)pjw,vK@z"%WV"A[!=$8t4iM_N%kn:"m~xM1O乔'&Q17U"GT%1t>&{,=I$Ҭ,:<ր؈28r.bQ>DknG$V[xLfk&ѧs;vgWOP5L&iOZ{IT{ooz0OMv69)02Znc $+ք6 o;fւ^d(I! mB4oXۆs[:\z$cSf#Ɉt11dEtVmLvք@r޳!Zs_n|a\жޟwkl%]bna/? 
L.Aq:5\f\z ^ Q$+sJጛ!hRĐ?6o}Zda1zqζ\rfO8n~ AF]WWˌ[\9믷:9suϷxC=rr-z kwmSsud*VSWcww6?モ}6(5>65n(#xhsBp a@U7!jIS;Oҟ6p:99)ӇM{!=#%ioC!jal @ Zq2qˡhe/nx`{=_)Dq6+giM4a9`Qj?{w<-_uX.V͡RNG1Ska]b HB*xO,HJcPȹ):"<":0bJ+;%jJmv◖ rRq[95ꂍ` XK9/%  {VӀ wQ-Ż2jNc&- p@>j}(ҋX-"7rk vȈp 6Ś9VgOx@b##ˆCt},6/Q !"h9xmZԐlj"boĢTJ+y,TT[fm8hce[LyFAfs:iW;Fz$P>yj4[kmH&،ԗ?do 9X'ؗ5J)R)[=^Dix-NKW*.&ݮol/ ݦ٬wE͞42DNiͮ68 ٞȆRBRtXcp[E8\467T Ͱ??AgYˁDBn831ڋəIbg5 U$ΰM>.O٤``oVu34 H& jfGi꟟?x'z0Zc,:@`)l p/B 'y#kʯϯِq %c|u'ŗ`o訓)8kdalHE>l t$wigjop^mH~ڧ]ҵװWap`p}tӗ;Lz&QqUJ=|7\%MLL.VxarK6hkmpՁ.#4/Ҽ^53٢ʳ`[Sy6-gZ G_y6SIMWyVh^ႌ!rމcr j֖4z]XsxVѱ2v(;ml*w8 e"q+Sv=BX]VQO.$:-DFCâ'GGˆI*+RFGPd5j y.4JQ#e YZ:"0 )a@clOjGd#`9K ScD.vbM.hs׳8ʚrG{˭B-:88bzldR'@3[V*L^EI9!ʰa!$x?(fHx !-a\ɃLWIi 6 <TG-n(;@wK8S{Wn1Hg^XADM _\.ɍ\EJTTۡ2oifq x $HlI ^SQ/9#.uHnމ]bBlϽF?^=Q қ=w)8>uDI&U&{g(o3g}IiJBIϫe!8e2ِ'T2mANɹSOx/d|28O 91KrX%CכI;7@HH jGHuðafYF q8*e`^|Bc/uSvTF֏:dݨus 棎6HX sts7c_xFS=9pk:O;{u~y??w(goO7\I9ڳ7 ڋ%;; M ͆Z59j&\3- mrq͗랷]:mC/̾d6㞯( UB`JC<مeN貎VG&v {Mb QJ=J);p(1oOt_:7 C:B\xeطk;e4R?)lN$D,B4I)gBAE@Ldg pfu=ud&ru#.mp@i.YJ !2y SjP8{c_aoAτˬ'o"OÏaMLsn99Z>*SiL`lX_?O|_fuKIUU˶l3A=(4wj1u/]+/G_.ǴaOB3n e TvOXرUn\4o 4>q 2 I3 *M&eYaTJQjt@J!Rר* bT-Me8Қ(3Y`jDzM66FΖ#Wœ"n?\]\LTgS[<H\&Ą!jPs %DCȈȑ(*R[b Y\0h;*΢ȹiFVG#$K3>^r*yby?}|=MR]׿9*sQ0ktR\Wth@1IN)k@sT /\EBtc9vjIZUk%fiݡфpA,Po-'Nژ$BSh#4NScp@ :g(@,OG͕MH<.TɆqgcl9Neנ5'׷eњ~C{1#8ý}ER7OeX<3uu ojRHKZ=ۚȵGc,W6]^? }>[غn]L[s8zMfwW[زun-~7w^/ۢ祖!+͞7, ϣ]1EW7t\. kсK^S xΚϚn[ngmRw‹?;)tt(U%ͭ͟I7gڀjQ`9525s2}e*v^NJHWRaqa!I66P6AhJX rmBHmjӅy>[,-,1Ji<K BA;\ЄN"D"d^ظ `x71 dojoRSG2 4 Pƍ&G#|4*ĥTL9^8? 5A)nNΤ Q1xΒj}W#glF.cpDsZyȯq5:ʲ[lvS+k&5T3'vwgQߎAMEOugљ_9yv_G9N?vqvC-]_07p\-yWq:φWOprʭ)s_~<,~v%%oڥ9K_>f~ļ*^!bK37 .p>e~+372ZsK3'D%)qQLTM2ix0H0KzȧT=iC'?::nq""50˯zݠهՠ\?ԡqguIɑ*=zRGݷ!Tͻ_OPPP/e#:N\\KE&.y9'Bruۚ~pX5N?t>OtOhTdJSQt.LJw>ΰ:)W\Z^8-mSRunʀmO.q{4u]*/T-Ӵur5Fw7PwO:c+ e72IӔYچ *1pDygxO}4é*k6SfηL>użӬd\D4h/7Lgnfj 6M09k#@KRmbD YneA=>RyGiďRY܏~j`;P<@-Oo! 
viq L2P4'\y#ՄXiSSJ90ӢqE56n7Vdkrҥ,a"* ½U_qA R.p9-pOKN8P |,ƕ˾- M@Às_gUa]LE^\ &?[g.2W_K >pҭ ޞE/m`ohBZZhWضH랐F6*[Gջ ShUBbZB `(G^r8Rn2rʨ=VZ5N$a,%uHm m&")c^ZPoDtS~`5`UqQ8!('McgnZzwObO,kޟezMUSe{«<^ Zzky\ v>XㅈFS8sў_Zo1ߐ]@G"wU0cuBU jcu 3%!4D[`Q fA C 0beRɔTYm|05u19"~a \ո-Mђ(Fۍe9fKa͙n(piř/ M'N;?bWՐzᵬ&λ*F՜cx( ;`$Z"ƪșNMATFKi43?(pa`r8֒ aZhT&cڛFS(8P\f B]<}NIkKz= 3S0 5+H7 l0)>Ag<.hf@u8 GXZfњ>9qw Pt~HSwc) C9t~<>Eeˬ=%x)"OPבW%^0&Gzj) bCrp甇Wp{l1m5*D@J ;b:m`=RP:'7tI d.J|v^ \Nu>Uy?yg躣,>}Wiw|zS"V!7aY-1G\+>8GgݩonFZ~[^T ' Oeb"utE̙^C=.vQ=WO?|r{$4GrHga|0r 9?t`żD1gUW9vTFuFBP-棖yGf2RU@i0 +>H?3<.1S<[Y_7wBw_^<|'OO_ ׏8ɴjC}JViXd!v,V* h]1VėE!/b AJJ۩:p8~jl! ?'@6+#1ˠ-q~oNe q)"ؤEC@ʀmX9,(8s5;2E%dGȄm>Fi<;J-!h2y=:Pqm7لګxyQvü]>ԊѫC.GL$TG*B>],?#RRƹt[$)N$J"HZcLH{34ȹYXHpaKؗEjPNm!& _C^T_-ļ'qy6aapF=W~˂| N ARልE'Q:m)Wiu=Bv/Pgam*$D uTD@{K\|AB޸l'0idbE ༷T,˴:Q"ERJga9[n_ G곯?R7)R5JjF:! 2dBIC 4hbD|%Z3nIYAAe)U(p,i1$G!A$7-54[Ą1rT8͹Pa)g*~cx/tYF%Mҹyxt^lAzm|#O .mA9c woR|}L7BR ƨ ZPU3wf+QH8ez uLֳ UQ*K֌E6CcX1]g..U]8(?xR ucUx ^y u@;؆{P  Ġőc %6%!nuNC6\d+,MtP+5Q'툱 ª M[Ms~\21Ek77ڢa-Z"+ahO~\HljpJFxYe9T6^t$vEh!#2MFŀ8B>8][s7+,?% @㪪~] WK1E2"%R1^$fHk\ÈL7_ndQxt0g=N$Q}Ab-"ˆ(ZDlq%  ;yfS*';Tdki ϓfY{x,c&pEi5},%(jfNsӃAȷSэ*WtIb=8p:sN'NtNOgO;ѸiŇ|GTVhbBz0pp-zp %Y>?vVgk2;{1j'kln? `)Ezl,&u`6mw\ϩ䕦TiiPy©9&Gx'0XC"+O=t~_~=;Љ~y rda|:s;x" BPL|RDE#󠙊)a.P+" -b\| 'Ou(f8|Ҥ7t{m]㶮q[׸k5nںm]Rںmնq[׸k5nmk5ޣ={f:PINh. g.kȡ,Av/e)l2"f~pGk1 J+-T$%LhD[ʯ-lr6ydieN&6$R*ksFzʱ赔&U/69>ިSQ4FMq .Yr)K$(b X#T¬8F쯄}?{2ѷZgѳmPH"%MBdz/XtSL =j+UQ:9Q#h(,he(LE5zj: 8 5U]v}(5XLO-5K j[zuƃh Q(tA$c4iTklHH^(h@D;ΖWZa%!e=k 8l :%nU J nTid,&%qbXXL3 b፪̌+O뷏? ;\~ *gwh^K`',@E(1A3o,B-(ViRvCCxg# I3A%6qDҥ:J]\}Abڱ-j¨[n*wL /7 (mprƨsɛ"W2HAz[z̐r=DtYK#bYT ꨬ-YS?>q_~l0"[D\B0Oa(m[LEw 2\KO98IЁ+FSBBQFE8Kg\p^(D{W`!)`iGUc\4YLKEYe-.xp8٘!-MTB(*Sr&D5yOnqq/xXL;CUvxYpwXPG:qFQG;nF?>P#я֟":6I8o6{OKA2m`*V'ˀTrvE/a0 o~M3XS eAŜ3ɍU:'G_h?RRTW3)9hϙ"jMHh 2#,Z =`6SWg0 g}5l7V ,ċЍ}tf.}R>qY$&4) l5X Nڣ &-[b&9,A0R[V[\.itBbLx *yRz 8-x!eyħ־޹ubba.+Zn匧s m9^vg~>yUfߝ/ξ8;]ǗI tԿx&ojo{3y_0}O;'spf6!&{]Vzj{>||!i8? 
Hɓ:v]4=9J;q9QFlv6ޑ8Ӱ ,E|7j-VzJ\w\%SWI}h/ۊ3ީvZ{ò X]Afv<҉LSjXfdےѾ N;6RlPw]mo+7k'mM\0 %Ww+lYZRrridYcwd֝vc-s~FY5|3RJP]C׽hsFH3T0c IrU }1(%taV ;6cC-흵 f׍iMLځL2p40Pޠ1+m,SR" ^+Fh`q:#QQ/-X^ؐZU[pQ*7g;vgbw.=})32TxYPI,? Ʉ+댩|2-I]-ӫ$!䤅kYPYFbVR(q ]*˴7F HF" K$}us;mĽs` 灡Ɩ=x4 __+f"Ā!$\p]k,q AB#ѪS&'NZT0nQw/~m?0B0tC wq#ɴz4t?=+31KUx8d^[/[D %.Eی\e QcBk+HtWk[MD+_إ;;>oG@A ݾv癠AZ4 nAѤgehP}޷dNhU+ V^z[e!K5au2Z [lQz0BZd=P kKsI֤2D'RYۈ1ќΙK ZƕkBf\M}vٌ@{(.iZi1>w<,@}:JWr86ǿw{ߦAm}ТW߼O[sEnjÎ ͢yp[GIOTxjss[ f^]Vl3qs[܃&6Zb3A=ڏrM}&gfUC6ert ZPYL$F{ȋc لb:vJXWdo%֎b'F!skϳRҧY++1io'#R`Zb feV\xVcLyFHK "C8dR1 i5GmyEl9fٚ8K2|"+CEK+N^TxryBS7qGQrGO /x-Wi=rlG.^Vc=I ΃. eI,ƒ(J"tRcN{Aj|Nj| "ST BSZK T&02 $uDH d!CcP 5B6'2꟒L \R6&[rЉe38 hY|Q2;mi)NX#\RDY@kKF ƣbLz^Gσ|#&R{iDخrT;}﹈ ]- O]^&"+WSrF)Q"K*Ob`ޅ2Ƨ.K 撓fez=eAI-rWs\s&Kk\j.7 =lV®Ӊ?_Lz|fvW]ت`w:Vk:.{i:Gj#RByt^]67=㡊taAsGw߾~7׷| s߽VPv ᗵ>F߽Mc[MS{Ӧ2M6{=%./h}kfkk=xA|)}1͉wʦI^qb_K54^<VJf/ ڄHBҩCP-|8U sqVNb ;S0zpwm~hc쫌^~6N&A[`ȍ~OIo/۩9D Hfu.H)]I"#ps<:2U=Ɇ3E(=(Pa$(zctE8+,!g6Zل(tbE2CX1r\n΅k.ny9o%;Z3څhE+6 |ΪʺB2a Į^u";g[64ڇ`/P/t}˄Z6B{b!gBZ%N!gHk~Xni~Ia8 4&֦րNspr}֍ߝg>}c:.ؚG!͉?磴'ߟW)XOJnR<`@V4<׈+pj૝WpB԰>(DK'_p]mNn۬+t_ φtk/mVŘkSQza5ߟ\N&60}Qӫ&׈sv>O ^}>o&2{=`=6.磙~^,^r-Zw kV5Mz) 9|0-ʦ,4J**kME˝8YF+C}`F9tJRT[EhM% 4ןk[&s:h=Ss|adiynU sǽVJqȽ&[^ʰmqIXmeH^ke[dj{E1WEZk7\Cg^2Z(\1W$[ya4^7WEJ#;s͕-i΃]LT$^9;˅0=&}bۯ_FóHI~|GVWC.GD k[Ef|{".m1EoIJUg3Kzed }~?nAϳ0lpr&ܥj, {y|{3F_"Nⰴ1eS^aa)Sy蟞o8OBBUgc߾s5T1mHMW _CeNe[QULwZkAC >L\>LZm>LJa;PFqa\5l*Z-护m*R 4WX)+X15檈"-M7WEJ);s2z}~||f\$J Vz^9JR+~;W3&߹ijR@qo-qфj;ihu)irZV-eeJ*UiڠrDzee ,-e~quD09$T lT 0Z#pcM5:!$B9X!g&^rҒr9&y̠"RdĹ?qJNz:< $^xJbNҋm%Rʥ5Cm$יh{5nBnlLIjZ%,)мA''rJ ijCnA׉d*$hMv0:{ӲL1"+Ųsي]kl)\y^mjծ>H4#J닷B+:E E 06QjT۠96JM=\{dKQg߂fڷ:8(Z6 `[s -gMY"ȳViaw_.R0e*L*U3>C?6* sv'74!d"lp1'g1G2A3 )T1L1 1Y$Kۨ5rS%%p$'מKL1 fkM ?n:y{Yt*}/F8Ff7o>yjñl4WL.x8C/Bb2A^Z9T%c!B7y[gK&%{-c:c6+7n]8y1/$~*Bqn!s*9->UIT,VVK1d:9_FkY  u-dVPl,-\`) 3A*h*PR@e?2W`UWXE15̩0=RA ?wl6>lDA =^,dFK7#4By_Xs*j^gMRr ]RdYJ6#W˱gJ{ȱ_) p_ Iz`頓~LapՑ%$ ]V/%6e[.R,\.Z)"V"D4v֝nF6IѤ=ϤL5DeɑgLe 䣣D1+yqrQ̳YտJkc}cX'N8bierH~4` 
rFwV֩u=bz/XO^ȣH0M#J/I92"Ei(e&#Q(9eZ30{-#chnDdfٺZ''}D~g?ۺP<" ra ;Ia'u4`+)P-Uty50$5AZX,@vGE3@g $NM V+%R.ffA 'ɸ-l{Sӌ({ם{zkIaNN+eVkMن>>̧v;GʛըbܙKPA!8*d#X)Mʳ1X+x; WJ`EkQs.b\f-}Ԗx93@*72f%qlXmf  wg3n1V +c'_W(M?V{FlE4 NPc`TS;H{EV6H@gVGFÐ= IItY$R:"*QGlFl;s_P;-j̨:okE@PrPČjb oZh^p,AϛxY4d`h ('8FcEy@Ҙx;R ͏""όCra aPXC涰9vDwA%#1q* :Q 7zRgGN BʎsI&,3"f>h^\p^gYr[\qQt 9$H IRHP&?DsXwfmPfCy;<-b\|2 gq7Qﳩ#YoXQT֎[u֏d(@;~TO9.I9ʕ0-m l8,F9ιfk0hːtBdPѪH &bBrdvPd PXgI[|Եj3qft ot>ξڮMC4#S]Lե%Ԯ?.뛇L64^Ѭ%Ԝmƛ[Rs|kڛ|mV!iji|Cބrl]z5SsTo5u:V :Q+ͭH7ׂ2{WM}4'iƮm]<1ȿ3RXr Ti# ԰d$8j#̬._aR/1",DDꥦ0+Cʘt杉lrgbl+GjQڧ-[Zm'0Gy} 3X=7Y콶zyc9w}95*r &0ŕe *3&\Pvi.7Ӹs:9j(0uX^4kw'w^!M8!0Yl}+/ҏ`tIj*{+O <$B,WȦ8$RyId`%ꉶL)uxG+t!RqX`b4P2*'.²15NZnSjjey7ju~,a&?Õ1O%ܴ?X'1͈y y6P{*٨l_>Wˬ&fCKNzh`ѕo6AW{3>^NujkOfT5Q-vճ"=cpp\_xIz[/| 8'[/A8+GŻp4JjZМT-pڴ,"=zfZ.pr_nj:[[H?_Զi-v@|TÕ'dX^b0G+ֆ楀.Kb)̫u7ՅU$ũP|Aq2Tԟ`Ѩ>ZKƍDsS{xgЀ &M+I3LR6^qa[lvjN^e{f`&}39q UG .mkg'hPw;Rc:wlDh@zO[z/ ՞gٜT/=}I9`{RWy \E\) w;!ᇷiwt6w&8o_}wJT$﮷ w%[d?z&T KIΪ%BwR@<9C`[w6S(OYP[i7Ǖ{oX`G},a?*Ԥ+,:uy? l7 Œ9K˭.#)!2 ݅ "P6thǭ6A*|YiqsP1\(wp)BSϘ8P=d:FY^1Tk%iT}t{},}_ EV_|: b B}yP \a|u۫ͯA%AM3fn{t{Oh=K(׼1-Lĩ}}Mȫio\e8Q`XJE)OPTRA}Hq:ץҏiQ##Ej[/!^d`H,"ᇖ1xT]H]FGS}uF≦MO)|;?#O#0S DB 8ZAƕ)Ή8;y@ x&PfTetۨQ'%! j@C&gf g}r2e٣.u6v,HԚc]_w?>1TMK+5v4&Y%-z.,B["{:e: CJf?yi$xo}`dn"[-;-Rm.ۓBsJs!!JK9.6AG! )#&J@RO )0p :Yٻ޶&W~yۢ> C@zNai%UA;KRelӱ@#\^H-h8z<>wΖ bn& tױ/pm~=Emf$ӫI EsVa1 iISb$ik* 9m֔Fd"׬9$PycÃw ,0|HwDBRr=XafXTq ! L\t= &I.n_xźqpDM:(ĉ"0pJT (,v~?/.R~i-BxXF#{)-Qo m؉4T]Hx η A?ll {P+S djL040 I/cmƽIb(&ƅ ft2Sڞ"?Yd>e(d*5i(8.7A{'RP3Eb< CJ0":lBrTjv9x"[6E )"{6 JKpM`%'ggr9%#t*U&iB5RM?Wǫy GJYU<\kiy3C̜QNNzٷ CLQytԌ; ' ok:kW#ik5rRRѸQЃN> /], f'ukmuȶZlQk:|Aa|;D:%.,YIȚ D̉*bQ:s)L~ux_<1(NI4 >܀lÏƾK@!=Rr'5X?5x 89[yZk P|QJq\2 @R J"z[G9/h!xK'L0:Q15By-a2+/;>P<کP͉[J%J%t@Q{`Rz(+# >*B=ࣉ]!Kĭb|4qU|3 >*L[ ?0'#XTYx85WZ(ڜ-rm|S(}AbLw?iG\v=0q-3M΅{hl+L7^1|PiR/Λ8{ިp!T. 
9#ZK sxe=Xbc9 v(؉X#w4 "=)Aҙr#EXǀ # "hX)Q; ZDl ha[0 ^)Bgl뾒˟@<ׅӛ \:2i|tM"#o 7ɻO7f 6ALWJ8ŏ VT< `WSX2$]?H\ d|s֖S;O0%_d{ۓ/`/Uɟ=)}>L{)c.F81l0.ds6+(K=j);05Ku6ii!&Be ɣrr>, y_{%t|𫼵_ī1a6;4`H#z X>Ԗ '͡ڧO4}>`fO3ا5:>`4}>`fO3اLdX>`4}>`fO3ا ~POrowyx<}>oj I#N su.:e▋]NS~)Lw|\ ]cr40LK+hu)o罡7t5tEa@  2(P #1`UD !ʑaC&DPXj #QHYu2LwZ \ZFL&8BZ":V;cgKCW%?#h6($ r;iEM./7q1߂bKotQY6OI36{ zޞx2+iچ[.mF֊tݐOL6NTߨb؀]L*i>ΡtT:Jo&9*!(sM*ܛ%A(CɼۓԽk'^++[Pn<fy@Z"M-K=;6Z!bjΛ?/m^o/|LSӛ?e2%RDw[\m1R:Mq,W1u&1rC i>WMNp璫38ܗ1A’NXJ{iI#㨍r0Rx(jx ;uwrS%b>@Yj&krQ͠tYqn^mW},7Px.=RKpCJ R*U4jf'rDr={  95ct :,@/ U; RH- Maq׌%T8 1 DxL ug%0l\ױm:cgp]l$fݔmHޝna $r-9Jʺ miO\@\!K ]J墣&y'2W<tv{*zzI(#|Dh4eT9N"C^rlY'-@%O''uiޟ\ Ӌ5ѯz:NN*"ƘѤH>(roC9ѯVҰ/ź -Z uLu ]|g#ԅO .N^~gG<>AEd)$ P2 j%-x[iD J)V4$!J6ɭqJF؆WE6R ,W%&,t{T/bճeǍϚgyڲ_>d~Kg.2n<\/߾|_3883?ü\r2!7F-^ђO&y2+}kH1*a<.^={.uiL3bv|~>P &?=}=/:MROO5gZ+Cq2ң>,S-N?˂$F{2ph/^И?`0LO@Z7))'t~4GU-fEjch<\OWϾzh )) YV#ŵ>g$9U(0P8)$Ee]:ځD9WUZ(+fhFrPe/ȧJ"ÐzV=Z۵2,LQ0-*]r17=2>Ds>,V}AԌ|}@q2U`(CD]InHw<ϋ f[dQ:3,o8N0lL-l8TSRgب)mhh{Ӎ3vۍol$V;.ɱyRFgMוm4${8 p!Ȏ>{$` ĝc9-J|Υᇬ~Ã4ܖϒohW[^P&4G)T#)E8J!LI?3B+:Kny@9e$uyu-8I $ȇ'C/^p=~VK4V%p 59wceA-35yVQ\#*gie֞\:"u$T!w8Ic:I^{'a9i:9..v/V|z~9tNB W@R[`KCy ,FRDULk:zJQ9gVx/:zixaVEufln}v޵[gTF;D!z@.)J)z|>XcuɠM/˱q&޽Q:D"ʅ!g\kr *}({ "v<߹P bvIa>rJ "cr[$qYnIƑN!hreqc*h%{gqoR0#4'(xAc3vvV]Rl2j17>VM& ,n2uZfк `M«BF*|UGUv;ǯMEU ͙W,ةpJZ'#d!ٹn!uܛ<|oxmY37Y[-jݭж EȂZXd!!JKafDQHc Z@durE؄ AGڱwmH_җ$@*VSK˺Tx\SBR߯1×((k\-qƠF?{D!2a.(CS0$h9kn;am2%=_֊o~dd/7^훇dutge+9nGurm!fר8IG%mdfɫ()B wB|zzۻ- L3*% y4Sܫ4E2iԊ"ՑF%,˻tcP~fH?1Hg^XAFi&i [1T$ɩH-EA%Ku ?ȉi]`R#$s) 82G mrh_l /3!?>j矃?<.*f@hZW SZs٧="'qR5?(8Io0>>p}0ԃ.ٕPµ䇆~o^w&p5 mE?׆м8lh|UYO_2Rr+ƽ.>=?[{cz/f{O%95s Taiw73UB`ClU++Y5O7d)]lNÔ*J4pv8o@s v:>q?k+<YODk^JgȎ}8<R)=mN$DEnAMRkPܠ! 
k&8x/!i< BzRo)5u$(O!%mjBU6C10f`f3lsVjue9 [=Lʘy(8{\蜞7wAW!3i6TSiRoWzT߿t3H _qs]gw]o;X;pu-q뼥{Homh~gl&oi%|x:kAg7.h*SK.ǟ3&H}2 YJ}C.Pui;puq5y?4ƿU> ޗ\o;%7Z{j۵y;0~pueo:w8[ 0s#\ UA~9-yu[uamWn \pp 孕u!0^)o6Z;I|WXz>P2Eeh 4\aֳ1>pA@ sH<&څhe3(3Y΢D\&]eXXXmop;]}8g:aV:/ogYE*u5ImWSpԴjyUۏo?o?^~-ju\LGq[Ջ׿T~RxJOUxլygrcu1<IEMs\x|! 's;֖(/S4:@`ѽ+yL1-T ǼvzJ!rI 0 L@fM<B.Q8DYBeR,I2rJ< bNz*ʸܵ8PTDPͥ`A Srsc8{bON'^ǂW΋(ՑFOvИ}"ʆ&\xT#9B w m!1HA'I ! >@'Ew޾Esd:Yӷ|׋{ݝipPDmg<}#*Sܾ qۡ:{8;8)`$*A3ꐴC+8 jQM쬘 AGA}rh1@z.@&)t./%kѾ#֠ H©IsPxZĜS?ST##nu0_'KdD;֋җG1b*"ve B.K48w8^,ͷKl>WD 8"P/~FI{k@Қ#B;BAQdοͼQšy/?kmhў~ЫHqFhF/Ą:$_hixk3ޡmGh9VDHn ш:.ȤCD;D B/P!mr |bQEѶ(Eb0icp :g(瑲ϨD!X'c0.FΚq*}z3Guxp640!1;$Ah"b-$'d#h76AdgߣlcdtѴZ#ON4CGNg)E a];6ڼm _$G)mp]NIoEKJ)B m`$iEh!#2hMF"ED: ?QT ꨬ-s>lY_VǦQֈӈFHg`=Q> Ꙋ(!eps$uBF4"ڌ2 Q,e=G%J\ޣ%͸( LaX5G\jzq\Ok|ŸdS( EN/1p)1C[ APT,*G3yO.bܱ>TL@KxO=No8Ov{gFDyE}>PԧQhf:t"sy=S`HN"9Q^kfE4[C~}M[^mdZ`"]Үs靝KYתZudJWl* ĚJRBCuҸ@~N#vҸa:i܋JN~u!N/=P&%'~Gev|{vJ-솎" }{紹^^ 1EpSQ@(NeY=?V3mj]hiN |_~~ǕE/q|1JZbp}}$Qּ;$}|8#Z;DWW^eg ̎s870;^n`6}5Q /~}IJr0YJAKȏhI%>I%oyG͛f P~F۬\@GUkHU)Ckm2j&͘{a7Vmj.`ǃ.]fjwtCO]2F᫶=V]!xѨ+$W0XUV1Tz $B5G\ccQW dFw)+)],}x5ӳ̎ƙ-bCAx=<*WUלi|<"xN%5ܧii됒Py©3zm1#ͭWD5w:NJRTQDpAX)#Fz-kFE&m-@Nv@=X`E ر|)e4 %Xn59U#f,b9;Awi|ˎnew~<7'GWZOK~ū0}_CRj_?vH1YOO;j*6 v|- 9':;/?Ͷ}kߜuu_r]ݗ=r@Ec-L#"|l4%}N$DNx6<@0OsZ fZ="jXiK6ꄎ9xp :t 1.hЪa ] : ̟&v=dx=R,:GTb.xKZ ;C9%N H(u16g5vnLχ f7&Tk؋#/w囘9$磴d}8ɠ!NjbGY-{&'eyPk5!&[8:CR{撤PU hwpwySczG4hG7['M'TJ dviU׏>ipp6:X5w #Ȭ`fC9,B-B0Uy6I(dC&PS}<J5f:a5ڼm٦GQ/˽ҵ5>Xㆂ ~ #RPczGsED@0xKOhi_Gk췞n5U˪-vڍU.UYKL&Ob;zZiv6= N6yۧҢtCOd)1=g"8n~t|/%)t~N\h<euYP)wK_.PFx?YF˭le܂lYF=l$f1`S P6K/tJYDL"IgTPYRXmw>סv>1K|)2*911 g<ۯewV?EX6mwh}.,]8u]gūԣniڒ֪>Y>q -OgVBJ6;<;:쥓&hgh1hR1EZ[6!0v,|Fm8Fz+7p-e8BP!S nX`RcR@)Z@ Yxcn&l1IWBQE\D,jrw xE9 )v5nc]„%0<+ !l`PJa` $ڢml@6vLģ{[{~b: g~3h|@k{k^if @2vN!")fc}gQgϠ T?'O ϊttɥRLLq]Gr1O# l^eU\pOc֩d)?>!lsFq—wt!ϕ#L:~n>?NDŽ HdRH wΗ 2 RcPs%np$$A\ ̘T.! 
$ 62)A-QK)GJT |b(]V'%ĜB^"T糷EBɒE بLcL[Cb?9P:twoʐ^"3X;*X LI+6d}9DRkxJjkzUtKP|"]Z֟2F6As(MdVPP]Q1hq Nln+inI0C륯oF]5wYOV>rqm͓*8«ןLXlKT^Y u c̤-S,NĂF:grծDCz CJ,VQ8.R)msQ@ٕR6@Rc( R8+Wi4c(X#+ITmz+'4LWO.L&N#hd2JLL1V"!QSɻ(y!0,6:dAAJBVUaMl϶0Лd]VGxx(PPvlqDуI㣲 `K\, . ;ox &--3\x94â ):v!g(Ytbʨ2d) ;QAEeT 16g=4P~l1"GD01hdŠIXmC*Z&e݀ŜI5,b\a׶G`9! y+![TL19$C-\J̤ZVbRQJ[TKzOUGi0!u6ӒMq4E3∋7iV*:pPT9$[{EeU ELI7}Pvl1 ,*lEÓ5rn8rd92{Fg"n ~`m㔄GΗ/= ;TLr)^c3 z=- 0\*S3y5$:h\ `W5K;.T$1Z"Tʉmؽ{dq\_ֹat}LЦGnnsK <jmR`i)cD)$fڣ"p/Puy#0SD;%K$ RhHTNB֐acC&F?hҦ 8$!ΊGOg˔(J26YER"Tzc53|&tG*RiA6"zѵXa?Z8Z'ԝn/o!K-@⦫ 0}WU/,3_̕&x]{oG*{ `vָ16v8)1I,ہU _ġHJc<_UEd)$ eJ܂{56O)vMSav>|5HWHVm镓^Az݋!y_fh";/k w0+"7)A zE'{y9LkK/ IݟazN5tGo;S*2 Q_[T|~>?>㟊TLFVBVZN2ϖ\~u'{ׯNƟ\\u>:Qbs5:1{>;|Re7Wʺv?\YOoNrYYP2\g-}EaAw2xtDŽ8Y$g?<{:R>?8wE#eЗ?/Ɲc7#7.t6,1vUaJ^kK״-lml5XÜ4Q݇~+\o` 5-H2k:+Zj!QZ 83ާq(ͼ&aS+9ʞ獏*q+u'iY6 r!g@uS_ͻyի9K7主2 \nLWYz+T餒R]ߤð!ͫQ+vع7R>12(PY-:+6r* eSlG?Tz\(p!m m0l{E]j 01OýYB6Tn֯NTLES6ީ)yL&jr7+m}r·^~*O& 2x#,qܘ}_ ryU㿎Ꙣ4o+J=*ө?m%9*KϴXqZCWkSCW@ː ,0 XE1YaB x@2@+D!e!x0>՘IDk1 ihXMn5 ]UG`j.ݫy?lmݢͤW슦]x+*<#r)ieD]>0Io[8 %72ோI%]h&=O7hLtmH\%jޣD϶ +S?Ο-\*VΫT4Ŝ[(eALaN;fOaeyM#a9f Dd[oq(Wh]EGQ3Hؿc`Pf`b~/Ԇ9GMӷshmԙ9~GF PtY 'LFמK(͓TSaeHeL^#j:Rjsfȵ1OibM0&7Yڙ`f`2o 69s|w#W݂;QKmǭwwâz̉fU"XWSԡD-.d݊W3o0hzHϿrN%_uub±#2l*e*"S@#EDM7{,|/*ョ ȓ*خ}Zc)*/V)&`LOB3dRy.hapnv֞tS<աD-w9c|%Nʳ[柇>5$T\!r -%& }ͤݹڝ9F= H;'uQ-ҒQӭ2qԣ&̼9O}]3|RU{*/LJeoqW$*E1K@FBR+Ii 8BΝmdV$zS( ngGV70xs (v-ovy6lL |(@GD%gSl)޸96繄SӍ2 dԄ`_εրIFEer>as#oM2[@l[4O\)sik\"NEB[<;]ҭjӒқhJoyp-*1M\KqoqC)N}sRʮa}\Z5qy/ՔF?"絲2aDD&pAjnDh:vF&vIr}uiF]PamU0ѹ)HsSfHZLRx//?&ߋ69[+\sY݂8t^TVlm=[Gik7!Pe"a-rjʼn|ZܰWS)'כk ?ɡa,_ \̱|<\Ky$"kY"Min䡏e 6XR>j_sx5*Cz Y@SJx->c3/=3Aŵ ,˧0ŻΦ>S|{rJ:!'䳫\#Kq- \9xz^ԗsw_8|v7m>~mgDEgPs5DpFԒ[iF?yʆ J/`r>8*@\[r#ph-Ў{_]K%!"DPA%*,^Hame"*@-Jhl4`rr6ȹQj mZ)"V_"D4609  HBǒ!vIQ`cn\% >Z#I5KL߰OUE}2: g\ >OrM0smSEjP#ԩupVy4 iSET>>Iy#|@Vh u"luLGPCaj3jX$?{Ƒ\r{C! 
.Nv@v[lCGĘ&$e1_pԔ(y[9_WU׃hu=OێV6j%7 O¥OHGFk EQń 8ZPAڻ(X{;F*cb!@0H,:&%4DEwJTid,F}*Űg  wg[e͸#ryI\Ixiy/n4| g؆{P , 48| FÂ52:X7QI2dcEFGeJ%Q3uBێ J ڔ>(Efqdb j㎻(ڢG^]iN%@2oHYuYG7ϟG7Lz>f8L#kJ~u{c'oU-f4r=?d{gކ緪1pE<7LܚN\z}ng㯆nv? oV1Q2.Gn,H.+)UQsk߰Fq꛺oLۺ77!9n#/?5/rL? JZuOu尣j3W78h~N#瑔1buNMD?>jn)*6mM$W3σ?'9i6+Gq/2砩%LX+@V W/"$@9mύ[?ځ$A2L꒨ ^j$oW횴V֊JҶ4!9 j)+77B BEruLIjT^ԋ΋tT ݦ#{֠1TK'nlw"vRR{CPᵊ8>Q,ĀX0]5>Xt%tȏjzݰ uξoTJ^paqs]Ƚc 2DŽBXM *".f\0v[6G!p%5&K1qÄ im)NADȬ0D{G(/2Q[Q,R13#kj:~8eYӋBq'K6B2*f_Ω"MHTɅd\;F~͖=uye:fϺȧ8sVRT9 Qӣ6U?M\<ލ8M]w2xG66dvrn8G/V]0{t!Y6$VWSyW/~9WUMK0.kk*gxx<QeL" dGGOf~qćOƹtXjNILg7%,^/?#܄Lqhea8B(%TTv{psDUٲs+v![5gɺRAZV;ͻndZu6EZzL9~y ]53DfLi{j1KfupяZRM[7njԭdw4_1A^?U_k)KQdv]xWOR(Wr1Vw7n-oTw_~M䷝T37?GNi0[e]egl]p1Vkf IhRޮ.']1zbvIjsPW Uᘸ}kH?_Iߛe6j/杲K.LD J#]Ήg:|Aa8gm(qArCD1d* Ơ )v".&T耮@W<wݐQBJ&@EtjB)rNR ރ^I=toZN⍻d[NޣrAp Wf(J!7I}ǩ"EE!B'0^mzp:??Fc_'l[Wy+xd ds$j댩 `\qFyN)^OV>]$'eȔ):hu'o \hP%+# K$TTP؜IM܃  ڨ<QJTv#0/r#gTkL~l8W8]os/Τ޶+IyEz?/ sc+7nH*V'RhI8{2-3>ZeҚWR܅dpȻݼv#qb4,F&γ$U ^ނʂSmr) 0H"YaAzMd).(SK5<ʔ\J'^[/Zp\J8.p $ϚM$BF4wy/C OMقBdɻYp3ہH7R{K Uqgp ë Fgᤶ֡- v#=OJ.ˡ2PޕCe@s)w5cw2_"[fYI++hoL !亭 @2)dU^ BVQE(4LjMeDt\8T ;GT&F=GmiDQX-FΖu|XKDfJƭW~>9Ӊz"~5Sb;^˕nSnfx4`<( ;`$ZU3MAT s/:~Py״suv6qZb< A‰d,oP(8Phh6{AܗcPȸmp%E_Xa C݁`dzn-H:qGQ2ܥz)=?ִUEK\^D"$rV%9-IpThT(^ӠrztbJ5r5?W~ތu"@34E+6ťi٪)Q IoRyP(`.o\VWSB74 ,p lhdo9kQ9R_yAt5yMƣYţWW[V\{݃Hu^y_h›epv3B19v_<MVث]rTOEƇox14[ʵ-\f`}3ZmfyQ4eN~2]zrxx~iU`w:u{VDqx| 4},ʃ?deC;>TќÆAt}7?|o˻7?Ѯ'>.ȍ:d}7V޼ia!MO= #i[ 0,~ Q4?'`^Γӟ^/hk^Uq[۴̏a1 >tN4^Mp1 C<TSܙ?gs`q2fedR's2&!HĘl[hHMPd(F;2]ԿdSЕ^0ޠ1 8⼞E8̶C*P{z^mN<2*#3z69dnw'sxsx[qeP`{ġ;UҼ:LݗhF ͬBl:bև˄)["ǹ~QxiՋ^$Nsm^. uxWE/t鷋QmOIQ龲A~! 
ޛ~ca|yєqGim>dݳ ;*+|*+,b~-vOHF{-fې֎huu[n=<进_SW)2h42:u :kM괧q冫ܻ' ۟eGQ~l]e Ia2v1БRp&ýIFN F4[-g1Hg<' 6p܃Yѡ>0:e%ȕrպ󐊹]~(rg1}ܥ&h֣=xL2ixuGxӼygygygyg!- -lllll=ū4ښF[hkmMi5!0F[hkӿi54ښF[hkmMi54ښF[hkmM4ښF[hkmM[oEl^[7ywi(f^JhV=!72BuWoy*΃y6륾%x.)FoeS گ~֝Eݏz8ßg16~<&WTYw6G2 C F)UqY#Y$bEۨ zRkO[oA@v]ʬZwgudN+ɅjCgdj'˼948O՟V~nHi0 \qBb2 "Ge&Ց3id,$e-*wF6_%7{_~z/r;J~j5\CVٍ;F1"hwKtm>gA;ȍS,Rگ8# 3u+l)O}K{ C$![(֧ qG0./"Y(a᳌) n|L.AeVCHΈMd*Չ|qcY;<>.tD\Hðĝr6 \\nlV+߳zB_|< R o2$<'#,zC0~N%R2,KJј3 EwIƞ>ZR RُƮ;/ؾ@;K2<;?bkh:'`ق81`t<0CKД'$:l4e1E5Z{usJ%DP̦P:( u&nnj̀6U9̂6:ju~Oqbծ6;kfS`2<#Ŋn6N0tƗB)^xʊZE%&K/Ri!XD !GbA$ 8'Y Rsao{R?Wl,q_,bX(+[D,bwb坔8V LV^ Y:zWBciF2L&W"ZS8JF4Q#bg|`RIъK&bҀZ81ZwP(W.N'_gYX*Ebwx !䉃KV㙫ćAa ,!pe]܆]{Xmv<P?n-*Ǯ<{gq ϡ`E?t/6l Sw5cϥ{-}/^z)x+tWX&0W9B'Pr]B'y mLΘ':P8V}݅2z@l@!E42d]Izp[OW z1ni.Lb#th,*@`Uƈw:Wy mSA"ԐH=$TPNҘ-$=zA9/UkD9H>8d(^6*7>AGFV};<=߅~ƚx=c5՛`>;>`A&'4 bKc9a+!|.qf;/2rP ^'\; gGz.)Қ IaUYc53E4hWRXk^J rtݞq/@sjt=tʼn7L*EkYw׮$lؖ[ClrVj7=rvx}Vw/G5Xt7u—:f9m;TnzgpX<Ph6{Es$K?9)#sPeqwjǣs ?:qq~Ժjhع==墳zK>]01ܢ#l3qE 6NfL& 6 0!f,D{GS܊%bf{=(`CpDx$gk#V,x9fS6ŕЁ kl@ ps%yzWB(l}AU"z`O OfEk؃?wߛ_)u^Y~$e @K۟0r4+l!g(GهM!\{WU91/A?MZEx.8TRa1',hmVa4࡫נ?vgf%]. p XFn5t_vW"+{z3,#={<{{ ]Olę#*v{ۺJon[ Cgn*[>]|~*³k;-wes\]wj9v*__o^$mn ïnD!F\+y3R'"qM#˽!( 7{'U)usVSO<瓻wW#ZnW."T j8Q"!EN82<"i;6)ݧ![_RhBwrY}jҾ-_-.Y3~ޯ=|fI)u}sYe]N3hy8% >U\=" "T_x {`bt9+wKmt%s.͎tV.X"uo+m| 7`#;p[nj~oF:,ʲP7ճK+95FS%LLVp= <˴'гGO y4XNbZ~a9dFxX.XâX2*KMPxe0{y*]K Kt8?2r9_kw']myg83'sea z1UX 3heDHk$AV=yfZ(;d`:zL$qEi?m?-g/>;cx{nUw?m7ibDstv6I_4ύ #TKF,$dNj,\uBs[~ϡ'Ojx{}lzav>jQe/qd~P@QuKQI~[g ^-ӶZײyѶ^#k}eGW:lryb6ن{Z0bBzmAɮjiw4ES<[y5n<%#g " ㊐QF:KQgOBz?gs77itL2ݯL'Ƿ\,teI)ʂN*rdRKƜr%IVn9sH"7U4/fc|7&Ҋ{Ux%TWc_OگHkt!m^ Vgj6ejKíʉW0X+`ચkѽpU|WF5\ϦF׳iu|N+ `1;9hU?[A2u֥Fj[W/b=j>{2rRlʘ*n}TAb#I)0!HH:YK.5yWD{u+u:ijG\6Wcwe#$mr< k4E|yZA,5͕yto<ŤNַc|}Kӌ wN-H=~G-\>ж]ṛNO4j\L2ڪ#J7DѪLP&p/X/"́f k>K|qPۋw ?zqvc^$ ә_I.r^1 FËXg9E dGoC2K o; L&.ju"јjdVvX21K`Q. 
n<0F3p-ĩ琺negV rLA˃*t]Y E˃cpmVaE uP4 ]KHrW4u2$JB3pYfa0Tp"3L$a}V#TB`VAmXED2 {'yJzAxch؂~nVbQI2 (28XGȄz@k3!XPPuz;)[ `$(_XڙG25K i尌^f@T>O V s C8C@|, i,`*+uT,l:@G:*t (]:Zt%UUd- <&AXdL@|ѠBByS$!2!(|0l3f:o, " +xi{%7>8\ЋAuD!3i.h q<Q,@׀R"ԛIe`[{*Fg% gr@As^OB6hR3*iN GPxberS)h|PպHLAYcXd$@f'"AGA`@BCh("J$hb,0k/3wB Z9]ؚ %Ee0=̕x]vWK\.ߏ 4m$uM\CrZ {ϙoiw֫ؑfD~$?C#-DJ Y xZChTRR5׸Xޫv 3uPj~q{kh{u3|On)!FM .)*hP<@ᡇds AvJb;v B`^A@QZ O6z;j0&j`'ۉ¸"`1\񀛚+.[9i{u6'hV0N W((BTUV( F|T515u *1#dB׆U"&Yv|NiuKks+npRC^ϜO>Uf-J3cD2]( ZVay@`0;T Nw \xIk |ܕZ5D@P÷ePXkK ZV2AhC3xQ+u!֬Vx%`d0{=|fV;M6RjL:-XE+ff%xZ傾vK=(@ť$klp^gQ"B S Py c?.zF'b/n5.:w\R) b%@b@{^#+| 'J\ '*]hXccN? ְ* WmJ:JmB/&v(-%B dI 0%4=DJ RT-=R=q S gpJ'R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ]U٧SឫA)0\G p*J @i$)A t!츐/F64{1TM?/{a'x_n9Qu!%» xqc<}9\j\eNU.fg ':6[5M/yQߏRϴ4?l2/|Ǫ*kE$KEdr>5!dl$c#HF26dl$c#HF26dl$c#HF26dl$c#HF26dl$c#HF26dl$c#HF2q?$6~826+_h( >^heq@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJU5$%QR G ](rBKZ3"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ Rh@׶j?o_&Kl @ci/|x3)-p +5+SK.!Z%D- At3F\C+Dծt {1 "+cj(thtA>ҕ)   "ڧOWRs=+ͭsC++4á+s6;?w(=>nܼt#N0p=Ur Q ,sB_xW3IPxZtF?#hwN}|19X?_G 1Bs>;(e2pt|-7o֑OVXUe-(dV2:'(olp{Or|_Ρc.5kPgl\7+b$OLypiaQw|12?xLIRl͝׬RV ޔ>ăKBPʖqswyQ~N^Nm÷ @+uΫ8MiQ@fKz1w#mn6.lq\L+LGWy>ܿg9=~? ]쩠OQ"6ضk)S$f,БIaZY"i}~z\ڗdf7o^z\t\yj^م<6sVF䓰s=>ԟLi$zb&7QN[TxSjzxLMbmɟ L(_a궯P-3QfBg& ]`#` , Z+v'G1842RUY~U2Xvݷ"磦z>f(*<<,,N' 晭\mB. 
YWV^yVz3JAMUg`׿-]ǡߵS "yWBԕX* UJmS*iY^Iq 0TMޙZ$^N,Gweh aU:pmz}V\vPMsM>"gz멆M:[S X+x6`8pK\x3 o~4߰iE=k"6F/5ʳuM}ȺSz c"FaCq_RC I.oiAMm&i`%Ļ9=j75ӗ;qgZ߲| T̻Ńk|܍7I ֯##0t<q>yгuVxxK/+]x&ͮ/+ 9])UG}.7:&hμB X?#^G _NنIOVׇ9a+qsZY_Tsw}F'{ IZ=9KM%|-$kF`졆 tՠI j}kvlh,wEv ,ѩxԩ4:yĒ+9<0p:RO 3vבS0 3gqq%͛+lKSnL:G!6 %׃PZ~lh_MPchotɡYj&x;__)۰I%w@`܇vm=3MoVgPڡ]n vDoz]C2kI{c!wvQNr [@%on˦Q_` 0EP<4%Pvg$" ;/|Z3Oګ{4BL ޗie cZy\7F yiNm5gvC]F-> (JFD)D)]R垢=RÙTAs]ܰJOtt% v0tp ]!Z+wDW{HWJJᵴn0tp ]!Z 4jJ+]`c` ֲDW{HW/+̥ ]!\-BW]+@)8#C?A,p *BWVT;t #]9ha=`+1+3whuBjʃ"]y++P ꝟBV^龜~K 接:PZ]7,ǍFg^lxUpC7 7ӆ}]5#G4/8r:_YT2&ldbձf)-QV<LW6U<)XU|]:j˄|M[=g\̞vh3;mR'MϭLULmWur QڎtBWZ!DWk\]!\BW~[9W+`b+8á+kP r  #])Xh]sVd4?ɩ8 9%f. jVj=(_brPpR@_\lcP/>q?do[sEToGtYTyteRZݦ|Қ~mDr y~r>X=̻k W<;g8,4M wi͌o #:݋^ [UJ3L\+Q}\I+6}5TC:/͋K5I(wvQQ9=>:rujʶkX#rdR(+pʽ13Ymlеt>15ia?/~A3E<)<Ƴ:d%~13U?{K\J1N2ѻ\ 2콕2:_q<*da\$M",#\2ƂI9pI e1WWb:[IQXZ-먬VE[GW:Cu9@GFjVqo pƠp z/G]/:JeU};>7k3{scb"- yE"R$cS'#J`6GH.J]G-M" ;M6v&y{pLlh&7 bgWj07;sx:/w!;s bM4l5~=O{tg@̎!;k5v6ǎҲSwZA>1:[ÒyJ=bjY YHESeEW>!Ft 9z~x<{T&oWmZv@Zτ}!N7w?ʆ\YrJ.;G,K<"W=N(<~ƒ[(^O쑂~0."TLJJ%nL*&A-@E&$[)@ o UH z41DsE&a&Q" F<%[pvG~Q]Efļ2[>ȕwŤqXׅr_OEy'-{L}݃Wr0s bLxTyؙSy7N<ڠz_שPy5V8o6|(=[IB\YCtC$3g랧gqR#N*q.-YEO^F)u+Vu`0i!͒Y* ܔ&GQBB&)1`v Ș%cxadIƒ5*1I4pext/{^yt#jk i4cPcF=j9>.[2hJ4%kTx0 Mu:zAe4Ң1&ts$IgAcn6f/kp-)ʅJPN&LTNXL)O9ʽIɯ]X QrVfY$-R 5e|2c=ұ8s1 ;Ȩ; ;+rW& !8 m^iFr>xfL|6{A1ӏʬ^H"F_%laP脎I`NTZ) mcJH24*Ȑhe *qJG@NQ 1Nf&UEM>92|:[|j=O&W*d;iF"!b.`.JddYHKRǢ F11Ay(}td1y^WOoc(mk* {#` 3[q#,m[>kbc_p_Ԓae4pG 0=O30зӣ4ӏ4rPT =@$DQHYSP2<^,]4Zb[@NlEKO)ca4([ V!^g铰&EۥH gG9a~4f$y}|kq\ŵhb7PO/'OGqDD$:%yhY9cb늒e6Wk׿}y}IכY53{jS};s ,3ºo3qW-jFӈ_m-d|~ހb4NtR-4C"茛-S}rDIƞ]*"/I܋.WuS{G"ѷ"E.di=\%bǓ۲r'`Q1 )(ku*A2Q2^h,mѧrޒpq=>څػI}[h LLʦ.⁍D"tQi@l@g+49(@3j(Fꃚ.F/(ꆥtT N7Tt2/ZsFۀ9y0iWTzSd*6jmfjx6uD1\q!@-3cQ XN:ܡpBw7,w^}GM']Ծس~_uM$ X_3 >L`cauPt"4@2G1EH]-*Pbe2("!RN;z:C/PgUl,LBEf]ŧ|Rllac!cch[9[Sb9HM)"QY"R H]` YG6F͆c&O0OO}}v'/?>Y M[m]aT,FhT&et0LKS:8_b[E \P"X5M62*Lb%!=35𱱢6vEja)g:>Yy&4krLNlrJZ4ݿ^r?[o329_mؖ+U>>,E{C_LPKC1c b *`6a8d$2xŊ 
L-H-ac*DTTlzr}}\*Tœ|).'+F) `a#26Œafd>-QSERga!N.'q:l~:/kr V"$Qx&XE>EtTU AH#/nux]f2ä&DMm]@S!NP ηh5Ƨibj7}640XYB9-R>`%j'Gs@dtg51efcF !32,:̔Q,UYh8/E%Tg6v{8#jbǾ4fEXBcTLX! DzmIbdT.[ $\okq0k'K!ch-&N@jg(QmqLZ$DuV١QWMl'Bl&%E..v.7Jg2KC,Ҧ䃫mÝ-`PAP.>]=l&C#αz_NdFnm>:w); j4t2}) y^;204t ݍFXX6{sxXs]bġi ڡ } LGcG xjZ8=o 'mI$RvAPJ(s{U\-"l<m>ܣqVR3 XK6__Z<%G\ySs%gb^w:BR(I6 0@@"T>5$OT( Y%@Ȇ|R^Y RY$c ^'S &A& %m8;'MX2Ϧ]w` ?wz&~3A%AG>Lƣ -bWm % ݇~hLPE,tEP2uT~@CG͍rp6K(ȉق3MLh^ؤ5)N" <!!xk9:ɾ4f/5 U')mf=8ޗǶa3A=o_kꢻܵB9jQI$W; & oٿ{P/B,l,(ل*\Fe!2#)mV9GpdMi <8_lgǟvڳn\n{B7w4ch~NV{OtIŀ"e/3Ua4Ă)֩lHH"s6#QdS fyBE(1k@1ʒsJh5R(Z}6v ^=>eCx_bW ٻ6W $y7C}Gq68[kjH }!i(RYd4F"[fwuOUUdYrFa=oѶ`9J9ʥ0-mi0iu+Y帯c& U!CASVF & )(Gag EVĪ  12ڌYNfV  yX'38.6}rQ2 ~-`,5x嵖f}nPWr%_l6=/n(grgRꍋ۹pCJ R*5h Fyer" "D^ظs:5ct0uX]4kwVBbnd{&CҚ)RRA8);Ŝ;3 ]"=vw]y~&ηbw "v@Ewe5IH:@\!)n"LkvH33: 3z-S`x:ixFٹF矆<R% 6 F3# ̔D!/9,'-Z!O+u㆑.], p=0rԃe%qPIQ1f47##AmDpNtRߚ/7j m(Q-:g{"[җ(U]*H0fb7L"pkǼ"d"b=6Qĝv M z.kp|zTOG7'qD/Y.Uv``X;$ 67E!]$'O&Y% /9[ħ\d=: $BHǣBgG_}+qx6_Ρ=J@EQA5J2f)'3O=?5~pnryz -z{9vH\27} z_ɛ^Uϫb-~ f&|*N~ߣ_̸<Fw_>}/%YZ$5|~w: k5 2b}5ʃxNAO>G8@@'Igv493&XK 0N㢓_x2q/ìfENA\iڃ䴕 |õ-ߢ 1H$jx$U|) m-n-:6~EUdU47.vv:+.HD>Gt[Qr g sv0pP*RJ:1•f?59uQLPA n>Mz?&`ҥ WN~C;>UK-J)6e&XR. 
bBR^QUb6CNRof!|~MPLV0.G֥T },9 Yr5W:Ae~2102'GOƓJ1J1;Mi}?DD[S3_\䯒 niU -q~O*O/uz2W"G(pP}DL  1Hk_w~?'lM/y^"NJ^qZuY$jmIZ%h :yX.s{rq/`8?'+W`` S,Yd2*c;6;<$(_I0V+_ j Qy0j$CP#O]CT8: 5T6>!`9l,'S{;8r&33v3Q+9Gl: 5ϝ.@J?yMܓLF8w2U%ܣ0W' 1+!aa$`a@o)@f긬bLܑ AzZP1uQ"gi0w,c̲ZV;@+I;޼oS|ʳ78 e)sH[f2;eL|.^VޭaïL<+dʼnV⇴Rv>3+G)oDR궝rWOg\򃁫H.'WZ)JdW(FHW@.G]Ej)wT2#+d Vp*U}H%봫W I z0p*RwTK c}@p ևr :*RKɾURvpJJ); ?c0rB"+\)" W`"_4ŝ WJ#\iA2\~8`vO# wpJ\}ÇS0!8c/|F[zVޫ ) Ƙcl2fO0/zg'zߝȥb϶gW꫗ݥ{ִ,XYf$QS-9KR 3͵bK*%rUPka!*fV6+SȖMḴ<:63w^wIUgf`4%<e;o4UZ2XS4WҲ"Q|T~lX":4%+YWƍ?^M&O~_^+YMM}Cf8stJ[ &dPJ?ys lzҜE)Wen=.uVfz <߮m.>BsJs!!JVaYp!LH3GYfGºxKM)p ЎY | \KLRkY[#gK>ߣ> )VXROhU?Um.qeknZz4s|]nU:P֒R2H"TxĈ@Μ T7$0 iU6FuFwD Rr9&L* ˔f܁BꀃA(kJqcPg!RmxuqpDM:(ĉ"Ֆj (2 %g^f`A?i}D*8(ړR\* Bk%=XGv6U"k˼zQ2QP @b'ḃD8߾nl](?o?Vҽ1å.j'v>B$9k.Vˉ (LB&$:ȱQظM?`\'8-Rں 22y1 Lc  A.ޙbzf: v7^Pճo%P0 8+PiatHp=a 䫨 2,'ydbx(bѻ9HU}ZVW Q4~R/h6V`ȵzߗ H-6 WRd0^N]+XYp1Iw&;>peЬazh6C6g]-ƽ*>T.rabo-ғg~0u纀`Q.-Vi[ω}  WuQP%Bz— A".v*Wm6Һ6q4yP"j(q %9}Qk"ʙ:jcLr͹g: $cQb /q1/-}:yba&[ŨxU;FIvE^:LLHw;Ux%I%-KxgIǢU]ߦE$EwaRH"z Ӯ)дM怦|>  ^L|]s1@b2\mcGXsȆmVikitUڃy-@6T1TEm1F: lpFm_jRz\KPUQUdMT^e,®.Wa%.Ūlې Gq'3h7ȏ#P( $Y6,r}ӫ/]>7h4 n7^h5[inny+ rųɇ ƹz͆b १p*/J/ t!8~0gT~][o ZQ֠\]VfiZiiZ-M@tN>ƙ'NW 2"J!eRiWƘӫ޵q$6{X Yl;è$#r UTKY$2꩙zM(q7o=; >H`6J`r*E{ 2 Vʙf,8HTddl7Rx[.P10XI2bD("6CLـ9 hV)) r[ň<2\A^:[Lۺ/ԏ/U_(tY[w{e3ՇRid7^D`u o8(Q'U3@NLjd#)w=wyG O6y_]D)5'VtdФ!A2DL^F?@8cbE"+WaGc2Qbbȏ3P dƅl;w^̦c,"Å@;kN,hXJTS4i!Y"c8X!!t]NdДh iP!$BVg=Ь =Č<T\/>z=-X&hRP>FBC"Q5(CYzH6O)֫f=uuM@U_RTȋOY3Z3$΀Rk'C޸Azc'ړx:40dpgWRJkt pV)U+GӓOl*h ….iByCm[/r[o}g \T"B.ˢ91c(YQ YPgap5udIov3s"5ek\j,(`9OʨU#UcGV3qv'~u3"POtON8FI;ɯ+7ہ?' 
BZusjˆt;gvof03 %$#0w>bEWԒE<;@=O3ohGii0XTDKR@bFlȘ$=Z@fЖ+lt@CEː2AGm+RE(I,֤rɭC3q x;l~#a$bv\>>=dsp#PY;էNЌ)xYTJN)&Ed^IVU$,0xxQ:ܠqh*eϷ R~?^$X㿍.e؏ nr[&g¿{̧ϿÞmtSɵR2˭PlT޲C{$Jk$Q6{ 6ns)}+RT9z- ./cwe-q!%mH_H(rNUR%3<t0JlFSD:ْ<tdjR㶓&@;P0QlƖ׷&6QAA[~L}C'Q#VV@QԹYQ [R)a6!UWEQkq6`gO SN%RKH!Ộ-Zke7;*YUtf^,EJSNP=A=W.Z?8B;bjxZ:eQMZˢ(!,*%)ХuC3qvBwW,WP}GM] "o_Т/^| bVuW;URS' M) D]օX-8D?Uz:cPg̵ȧRl,HgWB&ڒVC)%)E ض '"g kD9SFH)JcH"rH]dQ j69k:gV''>OǃoV96J[ 6þTDG>RZjJG UT2G%KұdKLX2E:fjL["`cEm&nE׵Rw°sO`YnvfMn^F%-^cqq}dCq<6^f[՝bQ˿q3U>>- 94yB-yBX)e)SJ R1N2ԎCV!cP֊?F:AJ!.xVxMasIeJ1);4Hi 6R-c3q 4VF_lls-Wd!%lurz5gt Nd(,K>c"tTƐ($Sm?u]L"eY R5fS;$gW1NXd+HE?ugNID1bCil`\Z(%KDm@ !(&]vVzfrLbLa3dF0FMLU&N 8EeL4m,EƾXfPh;X"n. %VvIb%*ON{s>PK5츸P3gtd|/uFjgRN 9rqI+5xiP1i'8{T|CyV>:Ckl`m[R`c$ITC,mý mBE ] {luj}c{w`Kn1qw=r7Q|s;Yy !A>N,ttpX:8JľwpR^4T82G8X;R>h孴"<7ѧɯb@jTr{ X$) D-*c] Kbgc!*e_ۻ_R6c.\ Bnm>ŷ͠SY߆3R) ЎGw+[}u[˧L،F: 3wteBaewL]ݢ8.t(K?g3A1-XeHsѤMg:ʹ\4ɻ㙆Œlڦ痛R\&*흘gqQk[nM/րA*P&)*m"H!Mfj8X;u-_00cV b$5h+BRtK)G62 Y8%I2 M7NRN}GfW^}sFfOuٖGo+xl'(RJ%DeOIX-1' YdBPO1<ӓś>F[s^,SGbq}bMouU`V"eb||4 ] !p; ~o>Z?'9Y=Z:̛s:Q\LO>ְ{9)T>˲z(lծۖ.{NkT}jhgG븟)Wc _;V[y~OFfI>aeFOL?=ЬkuH _1lWOo ktZpnf|Wi'՜5xv$]|,gvTGGv47izl:|f{BXxd5/d­WG/8s67XEzUtz|t<"H?UЇ5xU↹Э^-guqfDey+7]( lt6цfb VgO* a`1s.&yS[yݒ_?hp7)/5K*+97TgKOy׷2b>Z_>`׳]υ^F[~SjS*N`t'Rz+ _rpr~2}nJny0soX5ɽ #Y4w|g `v2>][z9KN#|2-ZWdCUI1|3l/^;m:\ U>GthvNf4Ϸo<iVÇ>41ΩR#Tgz]7?kЃ҃~޳$&5@ ~)pZڒ2Ei^)ƒ)Ikf ^WW ݜg)jmZ:RmX[\{[?i~hDps^ 9 !z;k_/J$m1Bg]]bSi%oH>$l{/5v;k]9L>*^ J@Mζ_EplⓁ͋nͽłcolk$|jɲcնl5ӝA<[,YdYJ,*U"vA=*~B A9Vg3cwqq>/_;:}s5gT/rgd.Q:`f.Ss "cr.*&~rrkD\"F\5G_tt(Yidʺ֐ɥOh}35HtHʇE:`*V 8s-/ ]<W!DODH_,j/_j< 8>r}οCܮ rtK/sG#ք̻=xCt:9&QG?Rsb)mq7o RKpjXO9mN~ۊ-q{eoPǤW8N.>.qL~yԸ~6GKSyį[U#SƃΤN:yÏOp>t^s'U#oA*1WQ S|)ҔSi92T&Uc69l {%i FC_0EO?~S'O甮)<cDafy(o>TC>LynOVh ϪJT.$ܲ!2DHhmPRiΦ]"DkNt.R Oz}PWN_cX]RdTNU{{y?g1U[kM}92y4<PpጋJpnÙҊ)y#nWb`zۊ\zrŔ&\lYhې\0kFWVi\1\P?!"\#ڑ+ߊ\1d)4\Y#2 ;m+u"Z//WLi$W#+g2!"`,og/(1ʕ`jHb;AxWD\1%N(;݈،Wmi8ʔ竳OgnntA^3$΄>X7~=\,g25= 4 }rr#4n<>= *:s]ס:)\֤Gd (glqQ'Ys 
v~sF}z}6Ȥ?3?4X\]8餤Ⱦ?f/S|Ya[$w*UcvEbkpƑۮ Z oT>?;~V+{ʻhʺ9CQWs9181\[)XͦӜFȩ(UѠZV)nYo>:CI"g-V{>;gg}p갳^ 0~5;=fgr=zi!"UV&pCEtNr5BNDC#kE#.)7Lr5RNkߒ\bW$,bZC+\QjIz*!Z+=&~\Phb5@>JnFЛњ/3\P7h[+ֲ f 2Ko|\11ʕkѐ\9l'qU3AՃ_bɻ\yi)AdqbZc.WL$Wc+'0Ա#oEOnv,: P~u/` >:~WzȴdM;Ů|̵6h1Ya}V.rO5z+VCq^esT"Ͻ8Oɠu @Xq_5Yc-z Ckvc^7$WmF׀nE/WL,>xJI+6Z6#Wk}+rEJuc$W#+ 4$W=fi\1IF(WwM䊀ngqhE]1v5J2Jd-]1Vi)Nr5BJے\bvpV~)$Wc+G[Z"`P #WL;+tzʕG @Vfq Z7tbJ&\)j[Q̤-hxx[w1. .5@ƿX?d;8!Y5AIZmrͫofǛO+$ovI90_uqo|)A._|,GSKx:@tG,_]^7 "gy͛xwW%?Hfs3E6N_RrRgEoZd*ܠrkտ@3,X.O }VŴvEaOrEN ^RMlH44d\׌\\1%LkWc++-]1oGWA3rŴ.WLi`ʕ 0vqiEVSZ=ʃӒwOhFX<dS3)2n9QX#thAj=Xns$x~gu>@;QJC$OnzFdItA'$-eZfl+qK֫ӹԩ5OW+{?II.; Y RWZ,hR4K,EJ44$WkF4´/|U(WJZ7GuwÕbZ.WL$W#+Ta+VGVbZ3x)jrij!"`xW "-|bʡ!>իȕA]KA(hFv1IF(W&l\jfi+t0=\9-Ɔ䊀fq="WDyV@)jr )+>P4#WL;|bJ=yW+&놻LoSH!ڭ]L;V\F-}~xk^8r|{VoɄWE.*[im)PZa7i0;iK-JȘ!Hd>Z[)fF_z*ѪE B=<#;yFOmz%S!R*)+=tT?Z.WDbQ=ǟ#?[;ogb'+kfk}8=;-5PѺpNV޾۷oWFw9_.h߬ jyuWeuKݻ~ׯ?jSt]C=KW8;[Д5ijrWf}'[w}yB g%Bk MWwE[_$tOxw!^9؇}C Y^Cuuw[u+POZ lYYL񶥺y}J0Uwl+lnf9:")/HH(\1O-pC4P7ρ9R>ҘOh#9tS#/{Zx~ѻK"WeO,Ox$/IG_DW] JIjn!&z4JQHFWA!o^]R I{>^wW4]||9jq9?-.#k$jjčVD:e0!!kﴑw9Z/c(}@U!)%BUO!,SV*0V+ Lmn[e A<\O3 XI9-Ud2ͥ-MEo UD4BVZthIkK J'\tKJ*Dbdo+! ^Gc4[ij^a_CN rRZ2j@,. ZEtu E=I2-~drCFSgfR k&d<ՍI֚#yt0|M1al_{# d<_?%o)2dK|B@`dCW,/W3YAB |v)#8S^Uȵ:[6pHD•o!+$!@xɈE<\PߟT>-pօ2iE%ɀ%0yЕj7xS<NPWN4/iTRRW&zZjOXab"U!$ Y%B|H.*EjoA-9ً[a?q1CmpQ*L6x'sCPFkȊ y+ ׅ*`5 !D(C?{Ƒe O;Uc1]L LbȤF8ۤՔwl+bIfV}S]XKͲnGdfɣ^f\*1c5.d)5>%‘FiҪd :JrJ%V 0 CMZe&5:/3(|" H&jZ5 @hBV58 U8ni4 O×Ud 2inuG-8 V'ES U_~^j lbZ1pߩqcU0 vntl}r;ߊ;&!z*`YeD-&PG]Jr}^Ls@ Q/сwK }Czv@ @L)=`,C)9iDκ$!;V@r0]Zx _3@[vRh Ō޲N+}Ƃ0-Gص7Fu&M35)JL `VDAoqVUpV<`, °" wMϲ] tĪM@k>ZlOvXI+,Iƒ5$RYIe6HVӥGoU4"ZF7)mU(aOAK0AlmRi6tŒؓUZwCP^: 5\Q py݈V!? ^Or%5p AP`AH &Ш3ES'2(8`뛶] + EISL1. xrSp͍9%E bZ0p0) p#X("#ÎW=U4+~(] ڈJ5- ̩m X{uAfaƑ=XV=ؤ=ɗ Ug&dd, D ᪥hsOH:'/Gi:?T赫->"A[vgQ}p$k 7VP8i* HFWP ̀zreR\H O30%7aFF,Sat9 L03iҭ ?H`"mM곩\4rUfX. LDC1 4&e n#!:.P,=uQՒ-0 Uyۈ'3tEC@L ##B0@5[h#竾ڋn+n)tkp%.+_}&Wqs:kPP=-Tad_~K[ڛ&fvbn. 
d1o.vtlxa "1_ֶ.^&M\=^ohx=,|Dm'm+\oI'ͽhg꺶x1ъҰus M^ymg/}Ά=7mکE+D # mDI (%_ +Cd%ЧNzJ V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%ЧrfdpG ĽQZ?z%ؾ@G i+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@B%5(`mVFJOP f%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J VUa@H7J 2(Z>v%Y5+>E%r^IV@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+X J V@b%+>%[cUaz:{R/ߥj"ep#_{#\V~%X >/(}Mv]N鿗4m-qq0i=8zΎn/D<ԿO/ZCNy#AGZZ>q9V):l,9WVtoiD@XGEId.݆>gnq/ipXnfTMVFkXz|do; qaC7OfGHnon8x&LxVA?8%n=FsW]X_>85J/㱹yn>_+$ j(AGo bbZ"~1JzpK܌)>_A ǟZ] xūO8x G?aucK5W}Zi0~S٧~}7Oׇ_N>P*]^FArmZ:QpPLL>ȜΑ jD{sDAu[:2Zo?94/{m.ҒYN꺬m7͋-bfqCT߼ws[W+৫o(~3y}u{N/<lgۓވ=tvۮH[c#F{K};6AGѦ5KC;wR.{dr*E}c\5XE{a--A*ڒ͊GGJ9+Y9Gpq\F #x2snq mWm|zlT rU>^ٷ}z6axLy\r%\ȇGKUو?=JRI.ڊddIIƅfhfR9̹#s?k\m^cX\3"wCOkw\o.fV߼~/˳;Iy_foiz_t;Q;yдͺжd4:4C<tx76O6qw绻7\mniLd [}嫛oo|x-_?oE>4nw7s- jѻ˻my[o_'뛛=7yb}p]v5qsUh7ٍEF_vcZ-?cJex7't|).ua.{xa]X¢Ñxv2>oޘt@ܫQLOw _Hm1m4b]7O%;HE3 ʤUߥnԂ\y g(Jӊm+q#6}NX{Pϖe61x;l98ߖHTt8J!jއ@&cBr"#}MEdsbo[pRtV{HnѸfcT:z:40fj&IO&3İ5L|D9@Sb$_j5wskC??@V/Fzwe}RwQM)mYӂM:(cjNz&TtU1%Égij>W3-8*DcF#0'/D/Ulz:mGy&<S9!3pNA7%9}ԃWNыK)NqF ?k[3ɭ;:~ZX 7HL fԯES3ew7r"4Ev&Oz>_Ƈ'r.2$nQ|CsxEWԠ`jBW8ܤi5`l!C'v c맳_n<}F5ifn!s'%ٳe=z49'mFRI:axV"Wj`Qpsa|0#nBY@4,t#GV= ͵nrnVN䫾=5L?4hUCيq0wCDQ)ܸp1q)U{s䍋ZUi:\5F'X^Cyd )iN:lά886J&Zbw4u]\\y>_30#k;Ā}$hvSX3;4p]MV F$n 'ZL&@ x4,3s'O_]JZ8^ln}5.(qIM,קJ",eA=>8!EЎ֏҈;|t`q?ih o`}u>3/BRj%0*@Ӝp印VbMsreZ)G o'F&,!#_lvdcaM6vXϧ,2 )ΉmXVą?WbB%C)3mDϠ(|K/(UoJ)ǩhsI,J) `\qFBhH~N2_up'k>땫$'e(J2aK 똘@I(P؜zGM܃  kEL1V"7atXw6(_X* 녳*(;_7M{y67W1̶  ][t'tt3gRt{Ht"XU2cx)w$Wxa[H $R/KC"b(QD  o"3LqVREf 2%'^[l$2Υ :$IuU"XI !#zj1!15guSƺ8$Z A'x _ zxlϞ%r0wâPYW<4$#-T@l饷eb`^i6hbғV sXҚUt ƷFioM@1Jhi .CtR+e< ,mPSR$*ɭ5=5d1뇶f (xVq1jPty1^\"T;iGEmj.z{׿ؚ}c=-LozP:+ 3pZZ&USvQ[|\8Vl3q&s[<&6`o~^6hMm:Oxϲfޚ8: '`J'y 4ff̶Fuᲄ^J*^%;FF)ycJ[c% zC0beRɔ NY, ZS@# '63s *VS|ԖhI ,ΆT39? 
%MltBK+= vv7pk2Sz,];FMB3?X= ̀_wv0>5r#rb)`8Nw2_a4p$`x(\g*SB0 Z3a,wX2&kw#j)AŎ\+)Gn4+߽=q:-'?"JVBTG] ԁhwW Jgݍ8 p6 7hDWP>F'@,\уSw| UdpdN_<99_ "K8%Js%U=ңY-1Gk\+.p " CLf?+wowՉ~g"T:s>wz9ǖۻsjR U- %!njU3lu3lf^Xޣ"F9 G0b^tGwmv:uVY!WYZZ=HErcG8"{ď-y oͭB{}N>]ONC|?~w??̜w'1 r.Ufw^= MMs#6iZ7ԓߠ]F|WrevlmL^3 9SvYTYUN]y"NF5avFE{T4 @bC<4F:5Gl`#-!'~㆒L!3 RDIko R Zʀ,YQpi 8L缲R/u8p42DD"z#EϙiRKZ HOdFEbۍF6vu1x&ku=3p흃1`t=CئpUô&FP; Ww!ьʠ(W0.?G˚h^ m)9ΪҺLsb !"_W ;|3*6XI++Xoc;WPTHp8=1BIьhfrcyLƉ N=ФGG7: 'E)y 8p^Q)1^*W@OI lR;E\ }jh pDŽŃfbNfgP/|$UN"b: x/lyB]5ġ@Mr!z ӹDIxx$'L'e]jl %{l:Ԥo]P0ANx˴@ #Z@J8Ƙpa6QjZ; B"=7MQhk3ݽT؝5/_>r JKC)8^z\(a+cӖgoy6ehS= H%;&Kk4`d !X jlL eu$VZ[gG! 8y@i0 lbz!zH!Y ћA+ExX:HZf?&g(n`ϒĩ\BrT*g8P 9)"-Sl Ά~wD3!I)$Lx<-sQ^ZJ/?^ Rs7#=E;[ ӌb׀i)`I2"N:ɵf`#f uM/6` =8Rr3y/J(yk%>3UMHJŒ*ĽqaάF5$EH;;zuVجۄGkB98 LD˔1Y -%.FGiqTzY8DK!DU-zg՘IDk1+FSHNG}HV9 J~<swD5a%X.#LAc8 V:줎ႭTJ2GWPCR$ jxT>s>`0!8Uaa>Dellp̨j V4v02u6ɾxjrg';^Jr=|>mIzo%q99V+JϖS%D k̉m\ "Q!J oR4ZS mM6BIh-R}#'TL-&x#32x5sf"TndNtlXmf ҂/fYn:2s} *3? O_[Q+ D7!) Q"N+I$Y%U(ɬw0'ZPHM* %^Fґ0хTOfsxLCAlcWfQ.:DR5R"PP+!e&Ŭii`Rxq6ϛ#r! "A3 hG D9#`\0(Ak~Q" b+"̈ "Dܨk8'0̸ȱ#2 *I͔p$P eFHHEbxKFbS, @r,i$53G@]<;2^xpqV5uH{fɮ(2(XpqӌX9$H `IRE>&DsXfǮx(3 ®52u7(3tέ 8k?bsqD@hvCwN\2\_M#0GW].cKJzpFk*z }L6CFޛ5m$=p!紏h?7xha4A} bǸOf1⒯}ߪc).s9 뎢zff(`jS պ74^gaXjAҥĨEǸq~OǤOԱd@fgۯ3?rgރ@£Sxaw5J'6f Ԅ^-EJ,%RKU,V!]Ynu O[zʠy˂T>oS}}ɘ.3v\",IRެ6>V̊ GDg0e+ZgV%`b1#İ&GW8E,c-W^ \.GW'LW]J:\W/(O{/\Ì.;JJypԪ +Yp|^;S5>Ҵ5')U/!\LbWm5SϖRkO5㿯a*sJ%1,_ݪ: WwGg1aX:,G7'%;(,D)9%.܆B"A՟i/.}el@}ܓN{Io- gD Ģ2D"ǚs(J_Qs%R!`y&4A >C؝szb[ŲFp 4 <#^C$R@AEd)Ezl4Lc'$Wj7Ok-ާ٫w'7S6NjFp.fjfIf܇bCk q=f&)=|K$.Z|/]Nvu'})Xi)4YRBBd)4Y MB ,U ,RqDdQ MBd)4Y MBdQ1Db$K嶢umr[V*mr[V*mr[f\*mr[V*mr[V*.JRTn+X*mr[VpTn+JRTn+J嶣~d'q"+F7f>uI_/rA TWPS1 %R|qV _$)7|qdDH A{7ˀ<TZAK4eA(ՄK*Ljd<)VAxhA #(F`*c"ҙlttZtWz+tp0qug愸]q\ton|e.6͚J#?ei$\Q&rOIm&"H$ gfXpLqNY%%<ЀZleF͝QFG?) 
NPB}|t<ս;͠ߦXXvE,ɲazD SGTh {<JzT/ S_NԴ/#'aKȓU?VJ8Zcv[KZU6u+ feP\AiS"RQ 0!J ~V}s~ \O5g5d7*ܝtY݇0OUͤjv l_Iښ[ZW֛pf[^ټ2ͣY\:u'ixP^5N1V:P^ky>vz˻W}2;EHφVjsn:l/n_66O(*[R[ne+yDͻRRqX/Rg=-k DJa ',P4PR#㨍r0Rxhfx ;:Y\p^ibKce7i08/m'sӁ[(M% sєRo\.…3_AZJFc`PP !WQ!f`( `!pQC9VNǀҠ\[EC8DKK:+0`"{&CLiX{)i);Ŝ N0BΫJfɻ;D~j5v@:5jZ&͘}C!rlJ0Mإj:j=$20D[p<z |m#@)e@qX`b4P2*'T7ղ15NZnIeA,ȓuiޟߪX pzf8zK0٩㈡脉`(bM`Ⴡo)HΉ"٤~lyj"]CR1 (e7\tg2ē7WZ %%m&29Ҩ lG%R!!(hc^`0K$iHڲ's>jkuQ"pkǼ"jXFi0qK 9'씊>c{{ThxI_|wp97eV'깚Dr{_KnA^s"*CLE1%z`|0&YhyқBrG[÷LF+4VbpT^4n>~In9R_+|]]凟 3T U5y:ʉ,#iCg?Mz,~734#wh& ~3'7ZՇYZoJnWKҫ%<\>ۙ7?,Z' ֙z| ``t6 <LXSNy/z}OL#G)5fȂ.e20qӓ os73X:0\r}] &7"U x]>Oklڂdk5IM=IN/E'u6=k|<$o (4s5N F/!i\zNV^IGC~Jv3w!9{*Y[EX̻IK1Z0Μ4,s7_i& ~c/x?~0HV$P sQaZ"Rt!` / t񼚅`֙!;^xXuĉ5nJ)ݴ %3 ZS=\}ʂv!bRcW6(SYJxw㞼$)Ɠgܻg-P LJPJ b&e*E+D$QHLa PY1T)b+" O΂}-;^sЫ| *YN@[<ٖͬ\4{`Zo^Fboǒ_gOc6U3:uq,3iG)jGwd:>|g)Ze7M _Xyg+y'(hZ y‚ dn"#K^t7tG$TPo e\+ CqcC4NFx5ٚe3{̾FW]GMt/{hzl6Bae"[l{c eu HV;@drSEJm"Xeuz"t`lA#}@$$ GL6 Sq$'u|9[_yX9yY)džV^)eqy7qK3zۏ^dLlx-kOx]>r%&^v F4ـAb3: ٕh(:}C+hA˂]A"jƜXIu"Tg= B֨>qwǖq$={a݆sVYQy%gM_9&LA/*Q2Epww/vO5m˻"e}RbFq *c*h2rI,*n[AOV N 1$JBLZ+)ymRgnbiX&ZFm$a|j \UaG̚d2ME?>jm].+並kZH]%>bLV״Yҁl T?O0ꂃ3qph4x|L/X c̯-[-'¨N(Dfp'Dal\ER.M_xA<:-'mK)թ-s$j=u/љo|xe_/~[~ZvR*k]bF4W"ih9괞 䅌}Y}p߫ka(4JOhJA`tELkJCRީ[.IkU/I]6;]dTɻȓJ! =y4rE^*,#3b^bRλl) aj!AWr3gDYM7|84.1IF7[d ’Vc9ƝmO yԊ_t̮r>&2}Ԩ}DA8[tv6:]Nu2HJ$%2yXB S\hWR{cL1?T 6Yx->h-l? 
'_T:{J*-W}{2|`FT<+U_1e j@%dilR c=<dY m-ov 7x}KmA)Cc eM@ JÙОml:۠}*m~ JYy㝀eF`LLֺSm:_%ݖ~ڋƩK<+~{7˛Oozo72xBk(v-:|;7oTv 1{f /m*D}o3TTfh3 k>7fe:zkΎ$7iMG}%c $bjL!i]mM!`y44E}@S=J3PH3&{Ҳ+P\ B $۴F\F%[s G}* XClN )8$E MBǚQ"y9[l> ߳q-q6[k}3~8}:*o*A:G(X,E7 F)APҦY?x:Z-Q<ܠqhUg PItHzOVOEYF7ۺ׶[Wn\mVH%9t//ctҼtw`8 Zh(Ƈ@2V!),'TXhؓ.H.J+F^6u]G;so)R$Vڸs^\ӽbn.89ҔJTPQB_BN0\6u7k=gsS5vffׂn #Ɛy*MVD Tj-PD=D ׿ 1@PEMd=Uv@^?ZC;w#n!ƝEv1F%B pĸ𧻓VH]U-썺:/ꊩpuUD7g:+&JSk]]U* +trU%ި+&v}߰yM;MRmIsH&YQg/:LEj [64i\0$Uj숲0=RƤ^&s&p2Ъlk' o0CMb:@j!\m9@JI@T>9T08WHB Ykc uMv+kfMC'5ktsiSOp}P{hbl}ٱ|| sGu걪 ;5>P?nd̼CPQGKKMmtZjb㳗Iy>Wi{:]PL,$]llO R ֗ u}Z6ew9@C>'"RrH,I2)ޡw*h@w:;#g2A>P8|4'Ǔś{ *,i5efA`&T2F9(q%% sh@gl j&a,ŬEYS޵5u#ڗ٪#F$JvlK\.\cTDʶR}<J}lKx@6_7*&! M`-XyV#gFМ,%Sߖ={lN{'[+lNTr?\Cnq~ޏtT%z|v'\W.WvT|?1kJAхP[RA< Y`O WԂ;.(tuKr !eA۔]c 7s$jKj얌J5YX3EAU^QT涸(ȸ#a}w<.&_6hd28_ 9D< H)T2@D t{fX6{2R*=(bS F%+\t;. P9#f-ԾEnftC<];*Ue= ح5,9/VX)A`9$*kܒK:^[&kc} @4dB\%(h #(zBQERz_YV#gs#te+[}yi7R9*WUgk"uM5Yr`o/#h^nr)@fFY1ymfLJyQTB9sLFKՆ@Us\LNiOGPBWv3 ?@gz2#xB;Y&i__y}UNjDp ЀQ~BF1Yz55`0E qgxV,2RH&y,SF9o` tٜрuM{+Q(~uv>"6+g(kn܎eՙ(]?%HYh ܒ1"q$g+hlgNΦ!kacf b)1T<ЀABCJAs4C񄫂b^"ike%eIͼ{ɂ-WP#C5J+K}}:fwt:w]ȭg=. 
u͙/@h{ץ0?"2+:5ԺKfz{Nld&}5eC-NݶznU=hv|[sW˵h7f]]n;R-inyJtgk5A70_T%X"X(z5 /lCTtbP2RuM=J=ڰا`XJ .S7y_͗ ^ \JYy0q Y $jDS$Yg}ഇ!YTl?&a\5?-ޏ[:qt{w=76})qqb?KR&K!9m &噳 2;W\ " /x `xbt;F]F.#7VItʠvD8UpU*KSRs9qBh/hn KvUj Ӑ!Iu@|Iq#jr`ˤ:3ЬͿt#P4PrrΣ k]v:'$URL 8=R#xf狓4N3H/MB91O&r0 S2{fEV,Y̓ʉKm!{2AT<8)||'~Aǫg/~דf,2jXgF eιE4|RԉyK?`6v}]mX6}f -:l_Q80E5qM#?R !^6.M $a* `C>+Ț;M] "1fd$H1bA2k}NS7L 0G܂aP1x14`vKy'vK8uG)j6;㨮{G,;=kLCկ{(զգ]܌ˏ9~,Ƿ;w}& M`t}@-}\/[Lmh1s[=^-iV{r85OjL=JjoU@ȃw >of9V#^{̯o@t|>q:fFz!m(e^Z-/5b,̜`ɞIX7]tw&n]9o3]3 EU= \Gu,P!WQ?@qk>_*3lqgN]3 EQò1XnjYB g;5x\9 ($RD-SKѠWZE,0U 2_Pk W_x:^~`D(Bt[ɤ 9rÀȵVv 17s\\LUZЪPrUGg ^VTw=iHLz}gb"WkT*74J1 4Z:M7 =dKh $PX 2U&lSXX*CGBWYJO1~m$LrpUB9=|/$gY=sDZ(U*tFխvԯzv=z8gBGp)g>Oh]Q>^Eg YhaYPnDD yjHd ^ô gUvʡ*@ TMd9D>zŔ օ~Rk.>J1u(eMV(` f/A3kݕB Q甶N|e?x=gτ6yccC5$@qf)w$kzײ\ 2w}&̉K0'S.w`|4/2܋]nա<ĘC8N IdcNaD&XsɛHdŪ(5m-\pE8]~6%y_JrGAU\ļLY)#-Zyt0&$oKhpIǛc$o:ż ;u g,4)&Z-sTm-{W)oH8Q=m4V%ݏ-C 3H7kG]Q{ͤG +Us\Zd9,=3`tRftJ ,?Ɓ۟huXSA+:<2&R"FqI /ͽƹ Jv$I Fzw\ *H˱T3Z>J0f`SHBN;Nh?n~6,\ߦYAj@C(\'ioi[uXDq1}l=`s$I2k^J_i4]Gb~[&i;+ry;dK-uāb.qtƎxs< ˣN\z C2:#cH9n&W+ίM nupNm .7oV秙OF~4ŏ{VVVo7﴿5w#L$=Zѐ7M 9کtVﮡT9ʾݚӪhټ\ɏ.'^gq{E̩4d] C"ȧǯOjq'8V152l}HJok=d7Ng\h4N. 
=s|:^lܲbQ>z]vU)nըbié}~ŜW+~GNi-]v6./߼޽wo~N ,0pI~4 < 73VCKP}5gZ Vs>^ʭB;Wgk|;w<˵|{Ӧz:5I+U4-D6mey|W]Tp* ]3@jk#=s(Y%%8e^n?p!yƧ yʨDzE dH',ʞsi\ ^r0bAUʡxn珹96ݙtcڀga*^L/jq!XSkT-&Z93^P)P+Zld-$Nq_.l"ͧD K.FJiO)|&HN]ݶxYv)HWzKaǸJ֟׾kt8^N-&K ^4.&=R&Y NG jWGٻ6r$W}ݩL Gw{ndIԶ7AR)E*eJ"%e~=9Y[ ʽOuɦW_/7fJ^?^7lW${.B+ja~-iղЮN@v{Lkz|ؤ}Ω^B#u|By3ī'sg=f,$[lyܒ# o<^P%\**rTcjk0PZtSr6k9ʝjIc2^i_YIyy!o?)9 X]v~t{dYs+4TY?cX{`5;[dmDt$&2ʞ^W݂)RV-ċ^ /nGt>}7<΁]ԾXsk0llFFUc-BrCdy3c?`19]9,ND\FQĥh".b.j?KafS#.^> ;gYF m>s<۲;?6?ߟ>SKO S0AiyYœV\kbl?WFWZl]X;L*tl9tX BoeQecr?۴#yzMȾid-m6}w_PՈTxTJ3ڱfmPCW)ۼRLa'ڎn{(l;_PE塃uaT{240!ޭcz8BvesUC";5-1t :pwP1C!́|/BTf^$UCU] 1("ƠS֪R@:Y:k2X=;͜98KSC~7Hg1N%ʡT%z".&.*Pu%d>R^}E9N`X|Zˀ^F )$Hk]sPOyfΖD״!6$y&Q@U;2 } l Fc F0_l=c7%6M;ukXxju Me)s&(`S8p>``hH9DLVO%ة!pQNq]>M7?Y])%W]0SUq%UǠ GWdϦ$rM C{i*xԀXʚzlyn8x}75Mk灊YNd#ZjvLVETbm09o/λ`o?Yұ-?% )SRS ZH"Sr֨RhlLB-c<؜>KIZ;>?;tFdZ{XO߾O &cT֢s j0h1oBSx5:]${k7:Uף2ۏ4܂mԭe&X-v=v5Hz׮[xRݦƤTT7O,U=8zvɟ3fp'ޕN.rk{rg0z{ͣ^vV׽1ow1֓e{;:?}ܯc@̀.j_K,`9566iQx@ldb-BrCdy36=/jdvem$́$ rҾPfjy1C^v>e:8߅wFQ cEs@uRș E+ph!d(CN*,kLj2{r2ޓkۙLH}w#T3]Y5m2\MyJ~t=ݰ;syf۹ >ʷɻYY+R~|4i}i6s^$3FWR\i"cB^4:GPU*nȋcUfE6\A(69Lb%; jm1V7sW {CB]lw6f!u,ꭷݦ_.^\\}8}LL9HȔω U!IXʉ=wbKzQBLMp>9dٛ}m(T`p5:2e˼$1ڽ͎]Q{FmP{bV1T +_ՕAa$b)N[p z]khoajIh(daEE'("M.*9S gZ8BݽQ2^orY [ 80)1HE,+W=R$!GPJ6]SzHP ģR-akR_dDl?ED2"{D*Hg` (m8XLE+!e@KO@%NFSBBQF 򠀥3.8!(qE=j g( cYWEdlKE2.{\1Q)1Cn4Q6 X*p0BfJ!p+xw싇ejG<@O.*BbRI+K@r])Pnr%+"wGb!2ߟn!pyf8]}Ps/MQ$QJgF Qw\bn~׷o ٬E͞m*!8dahQn70lI^jӉg/$cV.q@'AM.ay 1 =? rMk=o蚦]B UZ2L8.)Ԑq\&#/2?Ye&tr'_lN!8m4f^X7ƠM`1?o0#1^u)GhBbȮOFC1=9gI#ɭcnqs6ZNFC{Kl@uyoi] 7~7ٙosdYٳG^I&7es!q1fun᧒zsG˜8nu.6fNӚq]zdKs1IitPzႺ8[KtySW_80N~L.oJ2_UOt[l8EpVgNsx(Ã~߲0SLBznREW;\os.rBI 5 T;B"alV=fn:g=;̪lhW]"eS:t6D1eMsJJMd3B=x`OV{hzW/Snןm@jJwq0S x0v3!d0X6.r}+W?7?'OLmw9auz-m^o4? ']6uq'"RCf~^=o3bdO9e[78{QyD=NY1ö%[;;5vm|l<߿A 2ӆQ< ͸-oqRp'P}VDy^{Yҽ\o 4>q 2 ogU:'QsȢ_FݢI}* ȹ3;tX\HݷB>jvVntYobOLh 5YBi9EbQ s4zK &GL:䴳Km`u4[mMr= S1%'mm;^9djn:|(1W _+z4Kjja^(MU/N] T},. 
-=/ux{ͱ;9&iQr@,G O5ZY(jIhB 'mLQDMJb4NScp@ ЫDB@͕MH<.Tɖi9Ve=_z~ڊeY@]Wp+{KVU3uŅ͂|2 E7u1obڣfxjWNkm6R-z^iW4I7y4v3:qZ|2XbMwɶEP-}j_w‹{Y*}[?nΥ`^쯼Q ^g1ytI0P +xN_^S?XXֹ1qu"C)OѮGq+UcbɅ3! @0bDʊxVM D:υfB)=e Ym\D"F24Clbh&%wxdJW~:uq35!yj^Z eRVZt4FqqTE(ȤN:*LFlrBaǞC_uiA}ӂM2xpPOf4)XHڀ(P:hq#DE XNX22iy$o7F=fm1Hg^XADM _\P"IN%nR*InjSggaTGSsDi<FBX(&H BiApҘ w0,ov7>:f6AʀP $|o%G GID 83RQߟ <_l~~kiq;Q Հ ЁBUEeSb`r])^FD_Re˥Grj_ftOwM.') \*2~y2_ӜNFe ?R; 'JrN?ixw5;>x*%KBՒsu8N|m]CǫY̎;oɋ # QvUf1(\y(&bcχmSvTFdݨus NEF\쑲wK"~A't0t1ҩDep<<Ë3d?ÇO߾}?~ӻo?|Lz>|O? ̢1$GӣXApg~Cs MVmκZ2kƽ}}˥B;˳@x8}89"qwyS[Еza$z^KnZژt^G}LJI0, xzs4Ί6L"`, ȵlB@G/*1:#-uZ1(;X>EكRհ Srek;Ly%QMn-ϺJ]Hb 吗lU   閝fBHkRRK |VYgb>'XDlŔJ٪Y$LZ{-ve@7hdqq,<҂F0e\k46P`|6l+ńY*ކIrL7Hbc\1 Zv^ T*iճ㷻j6S}0&_>9ގsAjRQřcmWUMZz8MaۣwL_1W%PܽE}hw )q^I~3(?d[nv#Cr 9| 98@{I>>>тo^@.o BS0zx 7?o( C jujU//d&,KoI`^Gs(kQ>fr8]^f$\x5y/JsJ~pZCir^lGL(JRdLOA~-Y~50+Al4/~/K\VT.M+)Pj3罷UXz!}/Ӯ쇖(f}?a+4l֭:̃79Q5tpi ]ZKU Q1lAh S[CWj<]!J+::CDkZDW8t(UGWHWRڀ+BeM+@JvGWCW GWؐXWUt(mvut)hl]`mu-thUADi:0*ZԞAZCWƺBR6]!]YNj]`fCWB4eBÎΈ,F63MQ8r)6F2l~I={U,ZqPNY7,;>|%+m< E d<o^K$+4lSx6ՒENj)` s2Z&<̴hEVf2"ZA>"Ji Z& EtO|\mBWڦTututŅ4Jp!CǺB55+.!3+!m+X{ BZ7&9ҕL"R5jOW뎮ΐo>]`Mhk 6[Q":;:0[DWµqm9I]C QF~̀] X@(oݸ*T8,qu۶/vkm|&RT&:z|5pbRWM LxEiBk){\WH ]ClWaR2FRM)ֱ>;Йnƫ`HV6kZ>/e))%8kҥP֣kB/Tk[-Ev:5bN~MܶXc5xQTdռaһ׽y⦻JqLj*Z{Y&&~UMSZKdQF[{ؽyؑXS mPbXm:HϪ¶"m+DkOWʎΐ uZDWXWʶӊ0tBΑ!V7HB5th)M+Dٴ] ] #-+li ] dm+Tt%.'|JZy5tp-m+RT4BIIg]#])0Ud֮ql ]!ZxJYI J[x '&9M+Di; }AMZDWX]YL{ޚ+DȀ(Mptei & 䚙K-ͧbdvKB%?`CNLՄت-7PbArMQJ׬,ط8Q\ -kmkk2ϵ&f7מ\ Xp $5tpOZɛNWRwtutA> o]!` jȦd3+Et(Z]t%-!7m^]!]It֮$Etp"yzMthO}J!;gJqÄh]`Nڳp)i ]!Zx Q ҕT0"JՄ-M+Di;gMvYjG5KֈΒ$<~Ny( {Fq3x_a_!.==[(8zks8z> nn62/8"h<)u.ƙ}?/J YTӿ~3@N@DY9/|w4FQZhȴx/Bn3n [`_mKXVZПjᏜKsX~gI6Cvӻ 2υ,a:/Wtj-hHX'?K(>yW"ҿ-{*O:?sӘjɽ$ 4>cR]G6S:h}Ԃ6UĤ< "dFVzȜ7~̻ٞp!364GӳGD0-JQfa Yxx7nb)9`|7ќp̓(OA=6trsC0^v}16?ǩ0e > AΧi0V O)R<ԘE9"-S,yfYս\2O36ZM>6BXe,J('S> ʷA1ݢ_dڿߗ?`;d۹N@另pBw`u @>" VpߌFrhzu[gם a&dk'̳KPLqr#a"i0L͔4sNN`jqƳVՑ&2Υ ܺLJ3mm&DS )x ^&Nl0E A:p(dPe>рkaȧG(aOY:RVᙀn@gϥpCJ%f6v، UY 
ԽQ݌Gôb6kCY[ڢc΂&iNRKL/x8xoVYVT >^jL*7` !Eedi$ALP`8GJI:jjܩɤF$Ʀ0bmq(#ʚQv1V@y'% VV 2I]q-4% 2bqL\h %DiUSx<Ղe3>&i*fF nF|0xq̕IkIɡjEbNj$^2 X8F\}HvG3c=yb?JM%>LNjMڤP>5>_–'Q5}m>G\ l6xCP""vE?(QP+G?߲_CToBt'`M`Ev5̑uk&S;q܁OFMMeSC +30 2(0Ǻ?c5;oku:fwT=W]r+(g.;4Wn mG(@X:)Jos8Qv'c{3J&PrWm»+߾SR0Էts =}L=8uv4L^ylXEEW$˼[dӉ'qsϭOY.oF'\ hMPrI:ưڗGÒqB>y_¢i޵q$B^ΎU}3. *qM(Vz8$J#ġD+ vWTUUS2 obRsr=8E9Ǣؠ8R~ʍ$n>_HMcK3gk!~P Q)PhSn|L.CԄ\X% $n"S2=;R{##ԻPA>vɣD|Lƾ0'~<9[wx }:Oah}qz&?Od PjJNƦY)q1R0Ţ ca[6àA1lJ;p 6'+,$7"K]j2!H,r\m$˜sbhI>ddNU A Rߛ_:oТEf>CC PԷe7HNdOx%^t }jU P3&Kn%)vl Z$1 mRHE;A?L?*˵t}\,W7W#S(Vǥ)%Md-{?zu_>uaN5Lrg\ii)w$lB6`s0Ë!fd.d/' Mbmlnސ6fiNkF{Luۇ6},ժՆFTq; ~7>MKm@ܲ=m#JuK.N/Lh3f''uv5\nH_@%zBlޖhknw wtt\? Y#fˈMțmQS7?*]F3rtJCZc{Y8$}W"_Χ㵞){ m{On{NJ pr:h-$ .3Z6SJa`tO잠Ěl$bz.:-.x0fb"l zn[yV#M{-8W#D) }!IQ!6J <08;1^^POzx;PFo^"^GW^1?_~~lqk$I.+G{aڨoon% %3'4Lm<<=v2!:c*D&+<hdŐ2~Kcu7^Yy Ť $0T7F HF" K$}u>ۮ{C-Ɋ >߂6*f"Ā!$<pEN5.vVqÞ8O9"&\&6-ǹ7 o]*HlO +7}INHt1Pc΋eU`6D zjH dҘ-$=z q祊2g  k/).u),/f*cV2xilL{M "A9Cu-K}Zt›;~P}>j{@Y5C|6IKP1|4 J[4$sB#\a`* Y\ &h#ju٥keԕT jmPKkRZ)T,OmĘHs3Dq%Z!V(Ф0o{M\>J$ZާdVN8|3L~7]6mr20},Iv p- ]X`XXSLzVʼn7,~2~^^X+܃mpP﹠zgIGs7~ewIA#-6Ml~,.c`aqI0_$>T4a:D+v}1-e;iUuh~1 EH螿u6Qek}Z4/?|M'#b,7x%rdܴU+.檄ֹ@ԛ(#)y[TF].ߧ5>~>T7MefbH!1Lrc%`4(a->{/,syZ>o/Q =O/QnQe^6]JuLd #,%F$LsERE='@,r% R({`xGZ’qIVg>qM&}Z !GBs=Kpo/ ǥsvn ¾дͷ.viŔݲ\_UѮZF6C#ih0,rJE=v;rVme!Z1숕HZMOxeB&J̓@ 'ʬe$FoSN4O*_|nWjKSV !:yh;d6&Q>{ܐ1P24#%NޡVžӱ\H 雷k˿b!qV&T]`,cVy&NDW ڐq8hL.Jӳ:6 C?щIK,ulūWg'*YT 3$Zyyq:ES.ǿmiV]v>YqDGH:=R5~?^nW/>?;j>xs$zE̱2h`/XmYMW;Io1η΄ۦhh*#Ai:ub9㳻m묂=rmk\:/Sw:\|N+'xlAcw '~Y 6..?7~|?0o#Y % 1 ?mDpnLo~|4rj0ej窛 0`}.eޛ)8Zޏ~ȋLtVF*h5T^@?N&;W0A{"v@Zσ8x BjW-| %|0=*eSMN &U/hsyU[ʳU;g+w.VPډ<[,"gЎ;&t䶤Iȼty愗IFNJhiWNȍ F:9aedU^eЩb 덜RzT_fte=k>+~gCZԒգs>ɐG=x,U3g«r xuٶ [>)XX<1 XTȓWϔT2]w!ͯw CBh.7d0B2I,6~Cl#DHud]TN[֣!rV7&y̠"RMJCEsxq2Ŏ&y%1G'Ads6eh\Jmʥ5TMB8N{ ; tx} J8Y4'A''rJ i%!}aL[zLdJm-#Sd%1;He9I{qb)Rd׿~E|$E=fQT2Ѕ\p)Q>:C~/=ƱrA/%SJf덠ָfCaֽ&ẕ=ʱ~y8!^߃ޢ޾t|@a_En~f`}Dq=ӦP.IәNgz;cckncVFs]BɾVsTkTU7 A٢yO͌Y(䐁[]V)>׵t^ xq_e 
a3x}`W_϶E?<\j.?)VY'4~}tV$F7!NŧĔDm|qlQG>Zkuv cfѥM\N֞>Hl7=vaL[mђ[7⟯|( ~ N5W)pnCf^6ވmK!&ttOTs'nHt3N>1_1惜k>&EnߚM륯wb7o~^Ů-R5{=wJmPTGb)қ0Um%q|;x8y}}3S܀7 &hy,LPЅi&8&\rZ`&J/)@DW4 WPjM~銶 ]Mt0t5Jw(tJ4AA-L`jt(t5ƽWWeJGztF3pE&NW@)wLS<$23t[&\;Z&p4/bt֣ `jMr(te (?HW)Cj{^Մ{8 h{(%U[]](o~|uk̶g.G92x'^9:N?Kf!+W"߉ªĘǟdqwCOup?yZ9$agqYryn;' KwsE+o_=<|\ø55@;nΞ=8Ok=yoz>.1ٳ޲oR3=l\$Cg͞GznypK~Gc|l>CcQcOOk~?}D}D>:檋Q/ "n޽?9k^ʂÛCz v,Ƕ5u^fts(xkoBRw)vi7s=]͟].j݊]hN+u[7aRUp1,h@E y"Y1- yBx\emz+<9+=J~lIx#g{1k& Ge-%+R_UL\m6y!?RA]}w+t_~ca`/6Z\Fչ䇶=bJBY$E|vBdqp웤(-ף?b)\NbSs%r9Զ A,Bmf<sMs}4^a|#܊S.D5ZwDh.S DU,F %Fy(')j%"ZLmyHsyټr!ƘÖ@jCθ!Pn{ݹrEb>MɌe!G4Cc07Tj(:\Ө%gL=@|_maȾR Ҽa ڐadP#h B̻2{ XjB#r#ڨiXfSY:,p1:,)Dl_YwI.#IM{k9yB0WF:zY(/`Th,SNbUUysYU^kh>Lă\G)7 |M}ܗS\k209iR>w{cFO 2KhK&XPR A5bdDH\8"zd>hiВkӪuR5W!7vg9Gd-U9X1E9mt:PBlhÂkե?P.enqJ6 IVP[-@EE;L4Bs|k@)e%prdA̳8TbUhS7 0| 2B5%e ,q+YWe0lPu3a>K+5r^+! R̜Һ1R\Ttkkջޠ8[oe:Bo-%t,r<& bf#Drg5':2\[+U nLWӬAό ,%Ji`̲3wB&g:p28cIPLd9+L fm#f6]e 3iR +p}VieV+4O/XQWzn $\E#((nFeu‘sO(Bc$$.>jTy"1dꙂ U :@mJWBmuLG*Мd1UdnqHZA]JF?/j-gv8rRƢ /ϬG_w*6BLan_)7i" n恻fCN^ (,`9 v`-$o50zs!}~Z)A(=>\2'U$FUc6\l^oKc`XKEu,@5)ԊpeŢ&Ռl٠kWaO !B`E50BI9# ՠz̓?gq3mXB{H}$ 6m<`S<,ˋ߷zDIU;kfWJ9ZZWrQ !E1 ?S~r=#xcz*`]&*kk΋6{.{W\J=wBcSr55{Cq=v=Ň@EJΑz$/2@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H\HtD)aVeCB< R*$K$I $BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $^0 I8û:6 OlN:?̶ͷ?] e/hf08 (yK^65 5zo&py F/h  FW rߘi`?>@dI튶6i6]$7W?/ƆWMwɯ;Q q?翁ᤂ}&ܭ4c▮δ_fRO+Yn-_a- }K;\Pk9w4Yv-eK*¯0$k).K_nWܯcNoWt0qr< 6Ov~[;q(!Ń^ojxɍk5817~|Oyː"ĸ\vMzgn s{'8w=[80?s&`z~/ͮi79^wr8N\[P4vB?3QYѫy0X+U3MBqKs F,d4Ϻ(įWx͢;_?So`{}Ԡqw#;#;#;#;#;#;#;#;#;#;#;#bqwMsXX}EVU_ÉYT` J*xe=u`tYܰ`v܉S[i`Ʈoz7 |.YG8. 
aԯ`hw"f c[GT.j!w0$3/!h (bL("7b,%xs wS$W8Ź0ђ|WzaI\y];<ީlݏpRq펇\;uP~n3,;P(]4՞/]sgŢ]r5 oѳ[[uBWDA_eVSZ#M ~{֨K&Q|"j׌ ڑ\Gru$ב\Gru$ב\Gru$ב\Gru$ב\Gru$ב\Gru$ב\Gru$ב\Gru$ב\Gru$ב\Gru$ב\Gru$ב\AխRC]+g)q`z@vگi^lY9J,C*kWAGUi ~Y=Lf-4Q ~OY'FԖ< }J!N"io }xn,y J,0beV a$U;؎!ߙ87t(]3%is*R͏Qh5ꛘbt3>|r[~0<:Y0,nqv\p& !zf]beQU3K6~:B7^pw폍BD?թXVVtۨJ2QXeӷK N"˺Ii=rH:sG QDg6X)PEu@mV60*H]GBAQldb6:镵QjKFHű#uۑ_=Na.Z/RL!$U9F%9YppiT(@Ӡ8#Y]_Z:15ٛOER=)S>.oCS^Λ Ót;s!%x7_c^ZTq) += VgAoz4˸%fhy c1 !iiJ5#3@f/́7쇷Ua%i9u_uwݮLտO'MƌǮ^A_ B$֖ff4̪S=llH>=^?O{8mr[۞UŢqIA}y8 1د+3ׇm+v5\sS:YnB{駷?S7_>'̧ӻ_x``8*|>Hsw4oCӢy܈4| Et[ڽ}^~&t ?~~}S0/OiHSf&UWǒĆn~|{avF/I% K1!|9۲C4V\oLIWckcuoNq{8I's½w%1٬ZP YQp8'&jzϷ]tx- ︮\FVBX" Qj `AIԁ\jT$u:YۤcڤwkL'ȭgV/ s#epCVwv>0ק% s?Oas{s)Mi s%K"NÍc_/o};S'+C2R0 KVFaolP&4}~٘T2^Y .fzgs$I:lIGCЄdkul !VX jlL UӒuEq;^T]e*w܍Z^'nOeSRp Qp!MFa ܽC3qnX=.._/vȓIl<1dㅻ[4XmA>z~/ MHBS*^sӌhi)`Y2"ΌD00pu̼3F_mux}Ե̞&4Ѯqz[0v}UBƮ^,鰮>D.H.y/*(9FO^W CUˡyrܛ֍g1zmJKA;!;]w.>dO֜m-ywK vwQ>9=mۅGkB9s)DL)e@![&\: T68͖dy55ufV69v)x_{:<<{9{`VjW>z  ㋷L+nA )rD"Qs%c&qb:Qmb)܋}l( {4*okadN>t)Ʊjx&uXYtIA5W%V NbeU0da`!1} w^Ͼ > ! 
<>'E"GcM:/o[1GUQJOuLyr-Z%|KI_q-VtSo0,?ΏSݛye|<S[^rcO~I=ݟp!UvJTZ*3FRHgPAJ(i%  7Tkƭ69tQ ڲRd*ن$wIbr IC'$7FP|hML踣v&50A ׮nv\ULOYyg+MOFrk`Z/}uNNzٱT"x[2&]=7Yκʷrf .Yc*ǍbfC|* j @dIZH{mke4t=1M̜20=)`z]>*(e0R][Ĺ|ՙҙ-L3vrj gLg%]{gw8L/=r|Osm:dPeDt4'(Ѱ`,YBQ ʬYJVY1\頄W"jfc36 rHm^J-q%ݙvj@ɶ$Zb'ޕ6r$ٿ"]G%x{fvnFlQ)ۃA%(bU [ɬ/#@QYcdm9[A洓NZq.xţ0׻RJ8:2P$ VYΘ6h3@r2x&-d5lcDKZp58k:8yqRՋKgIɶ[E=$^ ,j83>{H\SΠ=9`_&)Äc!FŮakұ-lLJ;;|Li`/^w3[ju 'Y{Oځv;KNsBlH.RlFnir *d뫠AUH./ȡ~+K]T ,e %39ɢTD1T EXYRn*oW N3&4nv'#nw]\MI|PEs.y *Q2Ct < UATϠ# 5*~+HJL%h .&aJ'LP`\PCVRޝ_Kؓ&8>Q}C+L`2ŹPIs%rYZІT@UP*UAW̓&A:o6 Vp:>$YQ(UL^-I&x%Ht)\Aɂc\+5l^JpH!S Rs6B5I.^ShCLzyWR֏mpk~PQ(.k?RyîG뺛T(:q߼f_hEzrs!E2%༱<_\gp*(Q Gq>A<l/\j} =|So6֧zQz;$PVl@-IQPBADpmD֊%3h)QĕЁ[pT@T9peW,X΄(CtѦPIp8zUG30A4Mh,cR1A۬2{w㜿_!W1GmOm]x `d?YIʓݶk# /2Iugռq^N^ꃗEX4gb [.]/w^+kr^̜{iǟm6K^/o㤔GJǡt\xVYx挱4`eq3 I,cq`0r-1 İb1wKmt%sF4;Y`ESm+m| <U/\ cJ`r!a+w).N)]&pxǶC[F6B:< j|uE܁-^>OoFDǕ V a<"qk'AYU^Zy4W* u!^I @<zz壑zژG֣~a9dxX.XâX2(KMPxe0<8"?{?P83s~գe 2*DKDc+sd>Aļr9F=VCբaͮ82-2 [җ qZu]WNN3(& šP/`N!KYh;F=a,Z26ҫ7j5OS7YCŞ2x3cnf$Z }t/h|9c[~B}T7=3i|GxBn߸&W>G~ٟ{ԥzll#փ],Gˣ@Ssmy%&aiZOqzԣ\y.ȗC<6E@m9 5ow8?_\szx4ؖ=Hq:2Tt|u5sy7udz#\9%ί?=h_ {zPYU|"xEǛPc5.HQ}'z S3ȧ?7yEa?68]_?!⿗ҡ؝[<%_ˋdǘ:6=]t_pغؖt!+3!7V1_wa~OU?IDY3hƲ1˒O 8Nn=b,g'7u2{ в%#g5(+BZ4!Ŭ}*Cr)AA4^I<1k^>bݖb.4(eTY@IEL`9ØS$)Ճ̙+W@zI`,g$Oo:S.-ض\h\+򳶨vzPIC"}%Ғg봞}4Xyb<:Uɣg?~t4J wT>O.E~gѰöLҫbϹ&7+`&7C \T[&&krWL/F3Zfi?ƿ_RѧK:5ߪf8<|4̍҅?m"NNjΐKNȱR`(?^"=D3ݳ^\2#b<|!曣Nj{:{Bj-tMR}N VI)CT*2}IQWl"{k6E`i/gWɦlJ6י;QU d*e!)Ra:۠m.fy"VCo+\_BD׭B`"zDW7tEpuoUu]+Dprt%KHW׈u+DItuxt`$=+l\o:B :ʊPJsLV+ku_J)Ƹ:]J:@JJ'BMohWVqu" :@2J7XzCWW+Diؠ]"]Y\GWVS5Q;Q;K'l]]9-v㽡++zsvEhwy )exWcx%ҞJ~8S4 >#=*fs ը[\K/;#i;nG%bUYT YQPEkW «0wr~t=8ouF(nn :ƶS s􈮸@o: ]!Z]Pvu@WoBWt·{#\5/th<]Ja:@ zDW•7tE( tutV0'B\U/tEh-:]!J@WHWj.tvE ]!\u_:P_LϏWK9ƥq99*_ۙ<*sS?Jr*X`R)pnN^ֹ0's 1 .ȄVˮȄ@>@Y0˙]`zCW׺wR$j[\o ]Zy"6p ۣ+ ]7uz]Z ]+BiHWJP}ҮV+%I_Кt]"]iaR.1H(HlH ]ZNW gWHWV*'dj˾{J:@r1f{DW3"{ !ZeNWtut\Jvӱ@}Qhkr nZ5O%a amM4`~ 
_ovץtPvͷocAz5KZV/Q(JE.'4(ħ;p&ٕ4~Ƞ.U&GV/J:uIs_y=.~=ovf4fCHDͩKߗ~x}>s4C&1o/س]S`D,ӵ#T['Q=i?}SX|&^_Tӓmh.F./GH0B_91`3hIoL6S5(Ĕex*d$vFғȠ8DQ9(PM| 3BMT4[BSqNvq]t+vWW?n]\\&+!Mwa7 f!>|;cth__/}p${p(#XP@ q .Qf!s2^`hz8󖦮ҌeZ$bk-.ёӴbnr Π3E-9o~:3o:*ms5)6V3F| e!Fqp5Yq VzHym;CsK8Ǟ)y$]ӿ`"t!9f $m)Z3pGGnH?DM[d67[n1HF' 7/QfUynݭ/6ox/ta$K6]9k?|{][c'BP cd^#''eJBѤ#Tc$o|: z~2ՇԘäffwVځ¼il'x2SMb 6OI3^NlLZ/G?Wx瑩ge@}dK2E3mB*x P~4Ӫ9y</."w|hZp4F78-]yސOw[?G}tv3hw}w4H7>|7"̫:ɴAR_z%H ՛* So5̿zSE[ϽTRˡwXok-Ӕ_y鍾zYen,4)(d&,10{+uUU͒gT5KE 03ϣ2HJ7E',„Pe%J!s,5Rj&6[&RKG";b!X_Xc̵sx"ac:١NFӟLȧ9 eȥM&'F-5#59 d̒A*괈htz1eHB\)"\Ia*jLx: =u aǷ/V06ȵ: 64 5Q 6h}N!J[c]CEƯ̆ Y@uv>k^>%Aʠ d$S#e.V6j7TD9Ra>[lv̶ƚ^ǶFt!$Ԗ(gBV;gL+jdL*UHN:@6Y2sHKYrAN۔} 0XL kTmXMݫaj8_(_NlOw`]]l7޺]d L&x=d4RU#%iʧPe>9 \g!eKNfU6ك$ DqhT0*YY(ʛ1h]Za6M[\vqV`VV~RW:L 6x@t*C/QY.qT0GB `1dB\2,EC|}PTbH5Z:aK7%c<Xm~ue8xăZDNUEfin Ĭyލ*ĬWD4]ٸLG3f,qFZIN7\&D.~&B,]Ĥ,WQӣb%Pfm\6KN_4_`Y$kIyK|(')[cRym,ȍssfǩV4 .n~Y?ȣ7 ~ΩwmIpC+e?z㺲x_ٸe4HQ*i'$4J'52#-sz6A~ib|soJ%ruH&HK%(`'|)_\WϯSޔWZǏ>l^!>~ڭ'S6,Q|ϸJ2 dBE:I Dӯ͒ D"0EZZp, x /_xP*pu>!(\&BG#*GV{ :-x/x>-s1wެ}}{b0y,_Y $ϼojk5,XQކƙ(}t)D=ϕ9fb ^ke\pbI9Ƥfx,l252T辉k/~2.2 @^f^tAE :,HH ?:u4Ժͪaԝwm38V l&sCH}uzHjyx>-ospP^[vs-5wypZ$b&pHǫ Շqܳ}~c}oӷ%NKRM.@(Ǥyk+&8=pU"cq\~xb􈚦w.\FnA)2^ 9pTr):nU&,/, }V M) ke!._Mۆ32 B"nwPՕ Y2jX2 nueιXM ~:1d`լ>>}B?j)U6 n2+Y /pŀ׍ 4o܍lJ8\krRfW#} cHFM9 zp_OTkv'{W3S{ǭ3̕c]4MV))ܳ_F6W_#K~y8n:rmz?ήaBwt-$u"c[\A}!̾{~%jWjfq}p%S>Qݮ!WO =钨ىe{m urvufnYs]Y"H{TrW?ЕbCIj_$T _o % ~hf9[w*ɘ߼[X6queEs?u8aqտ?|F^ېx]8Mzi|4//pˁY̻|fs;Eޤ֕=n& EEugViYpízm)ENI7(siވYn{n[܈S7̲E5"Cͺ6o<:}jѰ|vGIb ADQJ(WE2{H>Eޕq$ٿЗź<" ,gex0l̗<)BT1fGS,v].Į/#"prH?XĐ"z:6g-WZoS$ ӒI Xhs%|J$z!"!6t0Ƕ iݯ^8x CtO ;lds-j{r݅fq"w4qӀCnI\cɱ9OQ,İ32 JGbYלԚlG>+ 0`,XZsNXIa!Y2̵@˾W)$4wM76w/IWR1 99Zg-t@WVQhhs'~ry[\~XoqO5ri_d9P҆#lTsnX 6ږO VH+3 eps:+ c,yL&0黼hop.v/0.!-2cS$5ɰ^je@;5Z1|IN9byBj}f7h4*d `% *ZLHcJ Yl07嘃`9_/_ioSӓ/yR qsp;кqό-9emzw-&e` AgmCxx{I퐔XgLCZ 6>xEQ\5 @>0\Hd_Nѳ 'l&O:8(Ѡ:F"!}F|UI;'I灁ƖjZܑA ڨ2GkR)7g78(qf_+lߪp\H=SjQ꫽.?L=>SתEĮ(F_Gh;ڑdQ>vttzQq08TAw^4,O`GhH=J[%RϦ{(Ґ-؄%SQEa 
똕\H.HRd*ʐcBk_t&sߑhJtTCwg"=7Qْ#@5Y8-BY0{ui mu؃69AǛ!7mOlhO=;ڛLr}ٛA~67M?LO>MFwݖn55 5]XZX^{3T_/N)^\\o]'`yCę|E?UvQےs4d[;D&IXd48%eh/_IH>($ǐ5!)c"$(yZn"&%)kpf!vK.FKu/$ɦcL~5'~ʹ\FwINȴIj)z2$2@Dc6N%l:).cNů9ڊ+N/48nl Bm*Z$mNKA60F uBHJbY׽u=2aY1O AtȤK&PA'dMX:KǔQ!ꬼvQ"c`6+ Ac"FE⥏6zemD-=6rHDb[ƅBzLȻ7.AT*"!J2HRyT)@`>4;߸.^q?=- hk>Gq^UwiFOa|j[ͧ1?Sʌ7Mrs'yqհ\2xZp!'>4iOO4=n(N>FZ;e{ZIes?w~A$q/f~mQ,9 ʤqK1g+ճ]qMhԯ߻+`lݝf%w6bm\{UayK 1IQg]<Y<{}G=`}y}o/f5hrom&?_~~8:%.Sr$(]q!ش?r9hQ*4^ @[ 1CFyY12&E8ɍ̎: ho21G =tHt CCwA/Cw}a߿fhK"x3Jhǣu"S,.jMe֫kc]^^|7ڀfzloOeBv z(ζ#C^>>qvKc sdv̾|\`1QY0%}lz>#}~:\Π'hM3jބq򅬱%ӳ?|l0XɒJ;ҎpΗGff6A?LhEa+6FJpV5ܔ*. Do* -?U}6I! )%RJֈ7Wsߏ߱^+`r{ -ʨS/Mefvse;75>Jٔ(SBZI\4ιo F•uD(=kKiTmI)W()6-\z%i }~xdLb1Y9Ev}L>i& 1=~<&cX]N#xP*(C&h). ħ &8A-(-}D@jkٺua+h;Y7jGuК-;|`u&FSύ+زjJ&jVUA)HW;HW(""RjE Z3x"ډw+i,r[]`EWa}骠+&fp \k骠ԣvtV0z1 Z%NWJHWCW""U*h:]ƌtte++F|UCW.Uc O nNW#] ]Yf4j'w۝4U׮ J32|;t%>* ͊aOcZ >+{ ˶o 0lM7eVoN`7Fe#M?zK+$y{],Z8;=D[Rc>S󭛶e*y"$M a>8Wj;k g h%ڂR{.B0d""CWZ誠E:]ڌtt%T&"ښjnkOW]+`EtE-jU8^ mR3.FAB^]UCWF"O^P~s*(FARRJa++ \VCW;tC骠4#]"]iiV"[OV > j+`%Ѯ \Y1X J=jWHW56ϰWUwUZ=t" ]ch IٳhE(-VM9p޺J!΁ڷ1j v.өJf84|%3w갷 gvY'mu\mAuB)V aA FSDWK mwCkP FAh,Ule5t%2kVUA0ҕTʚJ*X=tU Q ]`NW#] ]RF[ɪt-tUnxZ7ttS]`wP3骠vtx^Jiɶ[zV骠.ҕֈ++8VCW0UA׮ J-GA2 c*+,+:$*h@(Wؑv&kҮ `SOUk]ZqtUPw>pq1Wtp9|>]r*+$㖨Wz~? 
>0ת/r??K5Fvwlp-vwBkٖ%t uN=B0 .2U ]R J#] ] Aפ]`"Z誠E6t*(j SVDWΝjUAPHWHWȤ^]`]\BWvUPttѼ&6VCW Z5x*( jJz_""_5tUBWƑ\׿20n$`$#Ƀ%ԧE"r$3Ԉ֪I+N{j3sֹV V<*xIkW8Е=bPPb$*R$DW[ ] \MK+AJPe3=r+sƑk5+MlɄ'vnG@?S0/pоԳDCiRwMAOz|眷c?HӘ#5nwfƚ%sPZltsA#\#{=jHskϭRRK{`JqT)Xh< pT+uJ-VNWi p+Еxt%(]$]̞KW8.\RJO~>] J3ʲ*.q+kv%haQ :Ar̢A+BW޾|l] ]y8,i"Zz1JК+AiG1xtmEzI\GYGvAW'HW8RaAt%_Yj|y]=#޿RY51H?3"}NYh(Y#poýY^yؿޣ~Ahz!qJ:cxГ%r `bJ] ] _ӕ4<J[DW =RJpc^]%NzgԂ =Еe;]Ț] ]1 +[%⾹В;vwμtu:te!vִ~1ṯӕ~ ҕs^%+{;+ˋYAA]"]y~f+CW,fJк/Ng%-ldgѕuGepN"`‚J+j1DK=0c#iWօV @m{Çrjw_\HD;3COhְ>m Wo~o޼ l?!~!M{EJ"kτ"?_^ut @otqqU& R[4:]o6WqtX_n۴%ņԿm?+O|i#>9Bؽ?88_}Szߚ~n'ۨ]LjךUPpMr'*JEFBW9&˺d-%eِZ6z燚sʆBcUc.TU M[m}ʹRYwƅLTg; SMVowBJƷ*BEo&3YjAAI&R:7'ZZj  9Plb8ׂdL#f;qzNS$ZtmݻTSԻfw7Uw,g()ۼIM#$BGCƮ#D34fE {Y!SLQ-^rJhB0KqxN4G@sZ)ʕ<@{bUl4#tEt(41T`&?rih*˩|/dC- ïUh]#EG(g@xhFI]KۼH}U!2Zڋ#Q1'B&'Ss.>7'!̪B^܍5Ժs!D:Ǡs )'o^GÜ|X"َMѵHIZ%9įZk|N^TBQZZrմx ⤵l,dFg cMm̭8b"n%l ]k05Lk.4B`Ty`HJƬC6W̡  "ٻVLU¤t)S` `; n++nWH%J͒Q1LhHpu P, |2(Q UPNXVp6L&*znB AvE=2U ++5 e2a̷6QҜ`ي!-ED&TD!MwآU<B:nY{$$XYu0wT `L `iƗ%B`@YLu>J@T`-AAΛ:2NB8)VZ hM e*3[(@Hq`Q g (Ez@ߑP֑|U qj2 Rn+Wa,3UuA"FZ.ɃzFc̼BJ>f !ѿ&<RIh8(X 6A yB ^! C9TU54-dPgm| ʵ_nݢbF\uO**&6H;L'"Bmjy}grfӮײNns-W]-r/ۺ L} h"X 3 o b9PT8xi76BϛБUV%Sh+J2Ba1%OpHv9Oŗ,J茸pΠ 2YIXGm W|&Cj%M.Ov+{@u ,|.۴7WmϹN*d2 BztqtjNf =V`&}˿B{'UEŨն[Sr5H<RVO ]j313@9HhYeFIJApPaRDluH5\Qr#bs"Yf1ݹ SAA L]bF = > rwU߼YOa+l+TO+IđRA4$\soE bX0j`R;ZKYTPcDdt01t\:'X:W!h"mT֥cT-pFBihc`fe܁fzP=Cɗ@=) 1KCfj)Bvs4ݳ y~5k!jY>@MrVkA+wf*C-*mP zxq Vr:-(-lj䋞P+Bj{4CS3&k|TH֞'p*qKنkCWѠ[ 1xF< \T*N6fTS. lKC`PrAvʃfj\ .BE! 
%ϝq$@Br6Bˆw9{&ZCV/~e/n6޹~fm5 aF΄B.O(N W8= {Y ۞fXx.wbnuUʇkh&..dG_[~{uծI>?~]Yv9|z 6lW7$:ͯ=E^}ԍ}lݳV?Yhus{^>;S۬>sqj[[ 5v4g*ݒr]p- $iYȭ:E' x8h8h8h8h8h8h8h8h8h8h8h8h8h8h8h8h8h8h8h8h8h8h8t@ޠ]r0\R'2(NSt!Ej;@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ 4@ tN6KrE5- lZk  '?(lb*1Hd;AU ɑġ(q$qa4{~#RGHfRg`E3G@.u?=OPaDG# V}$P T@Q E=ru3s@5}$P G@}$P G@}$P G@}$P G@}$P G@}$P G@}$P G@}$P G@}$P G@}$P G@/ zfN7A_/:\b2`g,cBQ8~Q_~T}V(nbqߋ} 0zKSWXOcj0?NjXy VQ㖪G0uE,!]E$iA#-#`@n^x+Ùm/9r2@+4[0QA3ZVDLHj={ ŭ?>Cs6j;}EуMlJ{lE8V+1M.{#)ILb"aL!L*}JIJIto9̡3ϸsȢվobfK> 7NT % K*e+U୫޺z dQKbqVXJ@1KОP7]^%,WG)Ը9ɍ7hOo_ G2({iw.䧸Q񗙩r^Mz.H>-͆X&abtדˑb qlp @ebp!G'GqTMf$p")8e=CGY((KKK4׷W~˥<@' ; PW] I*\ι)BDj*_>V\sD/| dRsi&vc5Ӫv݌)nV̗0ĵUBP;a(%ozT2&ꐺJUw.Lpug,K@+[ %^bJ!u)WQW d]QW!~*$WW/P]qRWXv]%J ^]@u%8K 'mww*| Pr^\:`W{ۙd"q*$WW/P])ARnIwU:Jh Zv uRu?ȐP~gQWzǡ邱Z*Zf|$*Q.ҵ k96lp:GSNO]j0,ʢzsS}HPcIҘշneySJ<*eb6ţv7 G iU~h>tu(KTWD=̷")9JpꊺJh«UB)u^JWP!ukunDaW s]rp?WW/Q]1RWX<ζU:UBId^)[퐺JJpegV=ՋTWZ,_a>f6Ob 3#dqNÓte`ޕsd+cZB G`Ny8/oCBP>R-c>tyǞ?TǪyo6mh]AJm2hA}AE"1`SX(Gag .L 4>]| #KPCZa]?@IS_X*=zo{>A_tY?_L uC` VԷ8JJP(4 ͛\ٜTEsCprn\כuˡ_Р2%`F1 )`)$9Ae\Q~ d,UYJr>^zq&)˷wcЌT$WpZrTLbAWuNkx TjU jеfO ̸xtU8k,%+ xÃWOÛF薷-FU}Cx ƹ~2w3"!o2KSy=im-LJexw?j' ;{w;Xh8]$ffy KnӢ Jтd k,)PV8d)>H<%OMēQ"3!dQ3eVy D+y&%v!FԖJ(Pʍ"VD,n1r│Eb#I˔58wSZ(%jlֆsb;ILj ǚC?1jߣ}&y3Q&4~<]{]}lh6IJy8UIaUha8\"UFz[WoxC-7gT:y|9{l}.,;=a"hox [+YE A-c BZu0 Qd<)V|Sj X#&ZH0<)c"-;[s3Fd{VlxU2˭M=t}G\.Ηif^=G~E{H>(LrD`C I,$ #/l\9qNŭ`1VYZB`(3*h2:ڀmTRJ\50CgFd :;3!TVX59Bc"z][4~5pf؉5 6E64eLJ-ˍw9[xcIJ<]i|q2k @#QHYu2LwZ D=FM[!--3htFȦm'{]v=%oSNC;L#h<0%:|.糓-W6;#}?I+~>BpyEB 1,'une̡A~7P tw`LEgnIp6 P~MBh F"!zfW-le0gF`0KlL[/t+!879Rd4 htZ),j4 12H$`fP>bt=ơ(҉!0r~Y(~;8Βx}cѮ\t5.L a+<3(;Fm0̃RPB,Wb+^1T`Snc< NaH7K<^Nr<\ܩ_#w&]/r(#t)[DK \F5tN!;*2~*3#iǰev,5C?hó{n깩znrg3= # # }]$=%NҘ|kD03' GN PKf-lp#** ƹ4‚an$ETXŴi !`+V^@ϊyX\N~՚l\su]׃n']Ls'i]^xW90ҕIs$ztŢIӃ;q~*Pٻ6r$emd 8ef.,&Xnd]dɱ$a+~DCn۝QU+nH*Yؾg>w+ VUQۻzgr7ڲ~'LAiE:E.34sQ{mxOiUؔx~NF:J ͞MiH>d>c%Lp<%tA u3qnoM{΅$O~>;9ꂭ߬!fV/} 
3cbfQ8phT񚡹&c*aʅda˓9=i-M!BXfE&zDg.ۙ85 7iO4XY \BFI1'PJ-M(LjWʎ lccaZ}LƍJ_5BXR\l)PUQ>`J52:_'4Lvpv#);SÔ0ZŸ[íK ]ssz=w\, yQtcRec8]&ɺ/P!:wKt$wZm @bVSla ᅊ.ʒe4"ɛTwdl(uާ~s 2Uow~`bJKR.ix(y>%%_#y3Ը*VY/E*؉*s3%`{|몾P@tlMo[q:}ھC?wevQ&OK~m2%MJPȝd龄JQc1+U 01B9"dSE2咈*2(e .Rs8ٛ$P%cH6٨ F B cؙ87)O>mt0aUz- kE6=ty><ukưL ]/>^LdﲡU h Fh4z} =}%(CWkzkU-9D$,&gX( ʵM2 %)ESjS:$mtCFdT]ivkpx“έ2sd@I9%e)xr'n"hIQ(ّ 9ys4yr2gqi_ψX(Y;ή8/L4,$KM"Y0opE0#]ҢXoDd*hD0+~qB$8%QԎT;ϳl:GB!T$y#I64VdD"Xq`H. ɫAȮP(DkA\2 aق ߼}NF?cisi@2(',ɴj$`Gm/j*tDIFVZ4]g50sɗՑ6RR9ǥxZ*$z,*kh1;FzOz&5(ޡw@Bދ9MB<[AQfFu2f!6Y\\@MZDJO篳86BcHOտk=ѳggQG (Bv7:\]ӭaQ฽#zs[Bj5Gף <{8mőN.?xq}) X!Zi<-ųplHXզTG>6NwBZm#-jnI#lFma++6DQc#ΦՃ9>_us.lUG]Qݫɧ^Y}~z92rOfP~/]qQ:Qeu}?|Û7?T͋|x?c<1mY$9s{^zuWCx Jty[|Ўznm דO?L|o ֮zu׳b2HW_aZ\W:T?_x `BB^d]+|/~,^|l٢ Kowv_Ffhqg6t5E㓣||2Zl>Yd {9`nHto8:E=<}OuWm)1Rp8Yun> sظ҇&,%H"~voN6oM6'~lNNJEx}!Q%H |v)`v֔( 3yesvesŤzNf·5)!G̠ڵ୭.MVs?M'` H2 A'١NI3FhUںd$Y-H[}@tKK簫yj{P[Ztyp2uďf}I DJ&&ک3;.r>&{0ES]"|."|oS{**hK] .ϓWbݻݲ.O $ƺXJd 519k?BB![L!G]fOs5MHp f/ ly OI-wTd A6h,A!d[PKJvhY+{T7WEֶ>yV P At<#Z@TZBڳ(";lmdBAf--ᅑg^[ЛHdǁĹ}쇳87kFF#$ LH1Yi>N"b)6Q6uWK2XJ, *x򐙹j*Z0ޱEmڜ1S%/ܽzy 7f6eKupR5 fڢ+12'Št1懔rgkXYg]";U_RPB`%5MXR:9Σq6#yr5.Vsn){Z dmӾRnp} Hh /o4EhX)(!N;4w[2НBw}Ζ|A1K[Lt1 ( 蒢쭵Cj'fOwEMf,u{@?W>K3& 'O*攄:)((h@j#D2&# ۉ.۷`LGw-IOy2{!a#top:_9I-5f?wbH ׄt"ځS'm%m@6Kc ʂB|Ҭ=WN8eSK%Fw uA#B$Ř hiI&+e{JYfkؙ8cDPoQ 1)tDȚ0ǔԚ+ J%$뒲Ĺaƕ.O0wOvI)|=.,%i4EHou1 b%42g(, !JS/[C%-\P#"JdRT, F"cCL k]a)46𮅧:pљ<*Gm#-YZ/̷;q[풿c\|[E<+ߠ&G )!&*%h+0xϐ+6ajNUVIIx HŮ8,!)FNkؙ87ZVv}0{kTdee}q]پ>yx~:Ǔ8duX:??_XtT/D,\PA"'SнO{űt/RZ1tݭ y2pUq*pUH0WuzID)qoޣVR@*}{MLr1Bb 0bꝒ~ϣdhi}NgTyaĉ@; R|ļ!/ꁲzfn[+"q1)3Y>ds=-uG&h2N;szE+›cفӥ,>#0=t &Wi9&⹼8+T P:1@RW.~W[%I|Aj5q.UJlO5*ͬRVW\T|. 
>3KfA#ogYP8 WVYr-I,s,3˝ z/K]=kPK[g lxYlKq|QlKi8Ru&YwlvvzlN(NkWWI@~Bp%D \qќ \4\;\)WiO OH\!UV=\)•bV?pUd5d"(uv/{K=k\$2LGo5J-y=Bwo ^;z٧R^ KRsfP'Ч, !5Z+ hJ}7ʞ#RU8H]EJc5]1.\cn*V x2={zs0_]QB\I[^ CȈ:\|,PCkEA营lQG'Il Gk\',eMIG Z )K>&e4V 12:9=Jotf1.TJWQAm~5;^~jx:zUwMf^1/mU}"A3Vʎ2 2&"HȊYZ"y]C[$X4s"Ja%W G%,ᅵ$a*8\>#3FtT9Sa&f ^ɳ=N&ܒկ]qS^6"QP]1f_sLnocYjaySqDrx,r<˱#wq_Pq:Q5 &!q,̐1k1Nq;oPZQOcl"dI~M2i mϓKtwV;rofӳ4,7ȝ^+[XIv˳("gPQ$"a&:E9EЏOKh,5eAFw55?U=,r\SD>F_/<~#Fҿ|-RRՔdr)E-4 CssVM>{Fo1&☠4r7_;_Z`ے}7v/Vt+ۑvU#Ȳ}nTۺ$[UձR VلTw^]1kw]znKrVv_duEբqg:m/qu?ͺCjYQe:ݮ{OzkPCZ_ŕ.ڻ1wY=PϦ 7;?uqlZ/nz/vRaiԿO܋d0s+SiFL>\tOKw{ZlKTܕSxƟei<" !B@B$ %̓lUILz@ނxt~ =|mȣD K>1R6 Jd`ҢtxSIz3 O/S Weݟ? 7@8{a8y'>2eP X$ ˜sż1,|2T9CS)=h}oZXEYR?  mj+RԷew7 .dO|/ 4zĝ}0Nl5l ,]̊[E.e.q@c֒Jw mv g~|?2[nP:B.8^-BReG?q>ߝ@I~4auq]Ձ+BN_wRٵ w.! Kh"*aKت؋nL.×Gm}/2H̞-n|GEϡmJ;HQS_=v6E]z=ީHva>~j"?'uݤ:jy#s? -)kJP=tUz2WEFlW!^Q!&]PMr VnԝiSȧEEK[cnvQ('I?9kS\ k_z݌r>;LeǩhŁEhL!nˁG\k}2:p3jigel*вNn{~s<Ι/o<]_ gd蜵Q#YH$sbHJwE026( 9x`zqώ,bH}C=p w/ǗowQ&A)J*pL)Z5Yes%|J\)ze|^zOk5ݷvu6YGWNO'}&7smlG^xXuwJSpSenJ.oJ7,jq}wzFܰw{#w4q)[!G]sI OQՕ@8+HX=Y@D(~nJ 5S6-ؤ/D}4*iF}h;M/TEB@lu?ؐ P9^%l啷UJ›ڇ )()m]uaR?R(]*, UN(#*5V I5d t $ʙgER2hBDEgU$׸e0TlLfWYdQR9 e6B@}9d@X27'+7oe{L ~=r~*Nxq4~@V]LnT 8Ǿ8 {OmES?4YtQ0{O׷{./_3Gһ&䈝FY<@ݔ 88@B:)9;nogd*ɐ 7Q$(RoQz3O{S&}(i^/?NB< }wϣ;];մϟ^@ȲgAS<7M+woy 7_C^sо]ӣoSsί/._x]v)qt4)÷{ ] aUvqoR|&˛;O(ĺfmn #FaVbIZ|2^1okGUQ\7꺹"JQu/6ҷbY*Llp\԰N1d(eux旗?Kś_߼ܛͫ__g)Z7o`_kC14[ 7ڶo0-fSQZv$|4-SrdT|T'y_ȁgdGQ_oWN sB u[GFM 'GI8Iu&1h䋵чEϱ֩r) fpSo8,\^[ϗ]t~r +rt1eBYAJ/I02lR\LZ3?<\g ?t {:ejHN ܣeip8[| YzuHݧU</a! 
c D;dfWٗiOn8}~4!G^9k;HnXx|s + %Fim'ҪM3=;mٞgI)s *(3RJA罣F֥] qd?b 3Qn3m1`.:9Jdf+5dL*eu0!&%VZM|s/K|)2*911 g rـ-j旿FebF\ك\y|Ѭnz!<5i:=ݟVܚY߾=x9An)^U|=G_KW}iReAr*3CF?N |n~gk^oT Qg},%ѡ, B'S2JfҬq1!L*509d"gCY{DײHU<_)LN'6z[i6}&XG2e)Z 'L|;ph c)+&7n;=]~ -f4?D詸@!u.j .ծz2XX APhy00*k6Z`LJc:(Xkjέ$6&JH56kD,jt xE9  O)jWktc=&,*/+k Z 0KҺ-vhV3D~[b,e:͛AŖN/Т|Kٍvξ 6hs¸)@AH,C Еl]\J\4^1O!.$GJ ^dJV@ImT1馀X9+"\)/}etP6JPԺG3q'߃M><9ݰɇIH/_SLջ/U uV TH1 )0*((xhXIRx˪Zu0oGoI'lzf_fpOSR{v7[7o][FX(v0"{>cS_`W\Drgc#7DWP҇<`GJ;^fqR:(3=*>"G*xAbtF59IڤC2 U \BYA* ] cNLPQp@eaR-(WH9l&Άg1a;IjHQ||N]./V7~g_)/>LR J91:Eab,)@UhlQ2,aM`2ꚟ^bCy}@׳QWd4pGM>kqepˀ]]3}Ř^-dҽxh7@!s%؅Ӷ_ !f R}rDcOULHtٔFW@=l=¦GCMq-$EbH*Ɲ*߷E-8{#  $f\J'xD!} .=ɞkラM@̂Ko7l6uA;?;)Ʈ(Rȝ  `}0^i &2pwrk&Eqgk!NH奔J5d:#ETm@~8o0i)\\쐄|e]@i'GFq=!؉%oPQQ^&4ʀ4T'X J&CjF!h- |,}Ai VH2JKEhePZ;4=0%׽zFiW9Ӟz/͗!-ì_L/CSi+QVJSIJ*-9pBk2esmlHqX#CCFI$Hd8pd/QTA5f{P`qW?ʋ4nw6ӎCڇGpaˬ5<=~s7p/w93[m̭Aܐ2Y[n(eЫ)e'vL^̷,^GH ??~죙M0Ϳ;); #) \P![yz^NuzINj=吤GZj|G/aɧ:¤r6]ߨZ1 ,Ʉl>_ V@ops $O>=g:;:=[>_X jQC:#Xoi}4'ӓ1-</CXjwAŅ.5#$|u|u*ZrCVѩ)A:堭1N 1xI}LlӥHC/ཱི޳6rWwh M. $w,d`ԃe+cK$#cZ-eYnLO2[]fIC懗k;y⾀*ORϤ'RԌih#-hխR]ٶ8{w&wߟ;*'t3ԦQIEHOW>oj~YYtT 0A{LGf:WU+S_z*񬄉YzeХL"Y,0y <C_ܠ@G_ 5m.=Tiw9~hzUpYKkۡ bl>f]qFO2=\$.rKRd҆"yL],8FϜ"MC$)ǒ xs{:=x! ȄBh'ʑu:sZ= x<- A^,}ݕOXoS޲|&˲Ԟ4cr.@QΆ L]c4kRdK1j'A1O t,<̀1WJ g"HU TU2$hD/n(,֯  ~&e&nlKڼ NIKE!Ϩ#&A_Z{G,EO8yF BOoXTTLh XHWŶh4 V1;GِyqiX`QFִM vW^(ڃlGGqo%Rdm Tk$76 vե/ff{ v obk-<_F/i;G{.lKݬȊF. 
i{@|?&v 7FnixE{mh~z4س~qsU4_ =?փ11O3:zOܳd ~<6.gйzL?W#pݔNQq*UJѩ*-)n{Ww!2pBNHp %DU)dG rI4>s"H*PCޱ?Xj&4DŸZaB'dU{s9}ЗRZ$ٯO⤔zZBIy(M猫DBOE.c{2C/A0&Qz9D ٥A/]9pTO : ܪ,MLIY%V0]V M)^n KUTX0taR8$ H+0%M>m p-JEvЈwe3E>gZ(~G"h`/%GãֺtNXTIG2 H3R|^|iǐDndJfol3XȊAĥ6落Fl/y:Aļ% hz=ӷ0mբg?P2tRR~!>#ّԅ/ 4qg_' S!YAiRv=a#6^?\Tc=) 3L1FyH w-꺜}[7ߚ"4Q|^3[M?lKqe{A Ms6qhmжb;҉a2jܮ[bn%j6N+{Lmm kգmpd+n8%6pv X=m[w([!^Hq6|og"c50Σ0x犟*pPc䝓wi(._w.rC+e7ޓYYdBz#:ATݪg}26 d]gCBfDWd$  AtysP<̂VQF,:{*Ь384M=e۴z& GC,^ZTٺXi}:]ҒTb  Fi{ưܥyN^T3oHu ,!wDGϔcCej6e5xM#!w)!K#S!+9I2Ud$Lr:pPcM3t($")7IL7m',6^N7D_TBC'E,U7CvE)p쵣gҎ_;^ L+NJYhT@VʳPD2} daz1*y*j5{Ђ9<UQ}wF\i2Dp8_J%FP9OJOU9ZZ`}r IĘC8N ȤXw20"l׹4dĪ}tdw1Kiڇ.?MtZ^-D/sV.pVp]( d@NLrhdy9:iNTjZ4c{-HeiR$ Z,[{+282Z 5rhLVʼ7).:$SdExHB}Ή:@@qV뽽 !RxU͙-7G(fX;ghFOce47-1bѸQ>G=sx1mZu y"fm[Rܪ= |Ob}5%|̏`jZR EX"ȫje| 'JdVJ|Q9w1)Ȍ ,e%V{u*3xY];|%5캭V;#I/+ 鈶x˴JtuSr8.JJ*坩O*|&B)/?&x=t.]{<Ҫ xyLը>rx5y-!-]_]ԦC 3k[A=v7nii˨՗^5S- Hg8h\9 ?҉V@޹q3iݯ-XE |W,xt%Eȉ[<5Ҵ4= ı<'=$/yMAy7mM6~<=r%imwqUd3n_䉛W>)''W[ nFvn޸9&viW=>iwSɽZISvOD%kXJ`@c&5 mZ¹x}+IaW'K~ޕB!9Sx`in"';/-/p7̕ո5fmRV QT(kjIW-)ŪfMmQ9prLۀ2*ٚ:*drvxrT>T6%^:gc/y>{J+syQzx5ns7^^S;32g?xJ62D΃+I krf0֌D$yzFA^N%o;^'MAX K(C6U(,K *\ˤqgRs6ɠw-lc8"딵Zt3* K\X2<w;?;Z[ |wc00SZ#ClHP bT-\^ӝ1YF|FJ1CKn8FE+H`mJYb:J(ox{"zR6dƤ} m[MFZq)ND_5BVN)2l.BIZ;餀vXM9KWUֹ@9;gĝ# N&,9uHSeLさ0l^ \rKˡ]ㅠvg-v[?kWj=q_ VASl٘hZT5(e Iu9o߉b&']͞w7ۇimpZB>kGx_}DbjCI Vrȋy-4}sA3ڷKط] P9"RSe-AbEYAKxT]$ ڿuEDO$?~h9oJm/ _woV;@CHZ|a^ `Lb- 8"P ZWAkXl$66^ƎMuev]ߜXÙ݃^ֻemXw-wݏwmEEӟ'b͍?t{N5$`nJB8[ MdN9i^ͪ>9!ݡtxD`^>$EjnIǽK׺>~Ǵ.v~hj EfkM (iqxBj,$#~tޝgރ^Lbjæ*v_@X=YmaiSY# 'uc#ޘtqP`@pHC€csdPtD)S'%G#Z9줨+_N:KVT{ Y6Ttue:i ␫LUKBg痆( Ş%vQARl<fZ)6Ɩ`Q |!!` d8*Qt1WԶbK3C TQ>USNa29ρ ;I}}o.Hȁa߲ >sO!sG)E~R7{I=-)0AD/O!NY pN`uI6T8-๦!3/{$ٙdIYܒ|T+ՐE6Um\**dHIH5IըxڄP ^|\JM)h*gvUH lS.$O&g O"{NL%]O[4ت*]NDtr\s6΁akvTA Uog\QH rQ(pN4@Lo6j-v?~8 lv=7YMkroF%S~zzs$~|'p>ym][?G>9MrD98%w8R(c.ĂX]*Fyf2,UE ômw2jS Z]+Gȡd_+W CHMM}Xe2N3BEYx]?[dՓgo]JOVۿ ɇׯ/& 2Y5&A'`9łbGg}PX頬E]b;.f/dP&J@ ƺd1Z8*Ny\mM+Sy4oX̹P{TjƅK`vpFeVD%jy%FaExJƖMAg%&d]HKX$Y"C⢓)b!,J(B(*$&drdh` 
'O%"MLDZ@IaŤm TI'pEƻ5s1zRi$U ܴuV:-iBm9&+#I:L]'x*9;dHSGj\s뜬v >ٰsn6-YQmn_z?񄛵\lnj=}vSyu٬}0 `e D-+;J-9D\Yyp%pM.v㪩tqZp3w+u7jj;J\pu$W"X ~ 4WM_!⊉_cz:\U M7jr]7SM-WNW vlzxsW˙Q 9t~ݕwGlWAvSwd)sono?.>_4˷y"sh@#~+7~b`cmlD=~pPGp_K)V:ղ? 0_N櫳r~,S{zy|kB}h߿yօ9J=|{eX|٥WW`[?_[*[˥ 45kq\u6f_)OJ Qjv*w(I)Pv4gfb;i]*>9dxmY-:Ҳƴe;puhS7 ߝT7N.Qj7_՝Sipg6Vmd?Dֶ\5zUSUSi\"zgLGjJzzUS vj*_pu2N \ Z8P\ !SO:Һ\5\58誩$qENjD0Wk ZĹ㪩$\pu^QWsWM'V^pWM2wuJcO+"~*f\ Z=Jq=9i]{ DjJT W=랢+3b7sW1r1?̎Mo*崣:RFZⷤ =;F\ʹ:붒vMe h1~ˡ>npvSCvC 靗5{1N0kԞvS cʹfDaM_wwsq%AnpjWM㪩W+`;U\JԚ}M8S W \ Pipj*A-:@\Ƨpw&ש^pԲ;DVqe'\`{qr~dq<:U̦\\ }.sW+;•qr]nGWM%/fqŚ4@GFA7jr`Z;誩vO+ܱ69洷[ޣm{u!dֽ#[{o%¾SK{6=mҩOnzCjcl*Tms5ͬ+bG𪼗^XOjT+vXOjTs[IuSOj{P;r)"Xk׍Kir_5+߳\JSKw)MťK v \{USgx+QI/:@\1A37J0}g8zUSkg/:D\ {•ֺ\5q%*ADTJn+Qkjʍ W9l3%^Oj\͆qTW+:•&\ǽJʈ;JCUXw+5&ۦUK\5玫r'\5EWMfJ:5}W?hǦJyژYʽz[ؒ7q݆6kϼ@ {}] =hy-2uj|:,ܴ՛oͶ>"^Iu wPWLИ|jeJs༲V.z,դ&q(^y|ZؓrnuhDdDKdԦdp {QrI^p;JW+ ]OU DWM-qTZqerJo:ɵ઩WQ*y₫Z23X'M7jr zUS~T:^pu"XpWk Z掫rn;\ ,lG;9{ߚHZ?>~X^yXn)OG>7ǗX<>?˫og÷kkhe\^ )'0Zjw|fg''ʯ{Ec gj{ȑ_΢awp`L,o&d;b%Ybm[NIdbU)ɺJEz6o?Of8KE3[ ߐ9ysfk h1K_qtGJ@/<8V^'oKRZ L5PkzLyG鋖v]Hj?$F׸ѡ 8wK a9lH ^g\ǒOܨHX=ydStor\S$)~R_?>~(M75c& z_*|X{wZ% :ir ?BH]ᦛ7OZ:;^ fXxm)jn ,D_W>lq42YKUzڻcLNz]Jx>G7#ӚO논ݟvK4b,tc\"~%p.GἺTG7].yrB=K#'s0D=1chO-]QX>ݘ4.śkKT06wlhi}yx1}NRY] b |"0P c4 {S*Wr2 x~:s^ (vpt5\~e8\?ҳg5pR#ٵIc [rA OV b465/tc^r&hpq^g[XJjExjBh0oR$ ȼ?˞*LqA*m&E6`]MHRktrhPbg<7dم3H][L(H*d@LZ(+r=bkW^'8fuHִd_hZ_ܦZpO\`*d371ć9PeR9mṗ t~)ִc_h[v?.ld1rg4ULrf؞ݤQH뒵_(YֳkӘ2ǛnTM&cm=P)Ԯ"s6%d29PN:=eٜ;ԆؽҠ|6< Eh.,rds(\,gXXHN~J( A[ 5m-DF/XRǕLhG@RdbY|Xl&=rE""I?T4kNow͏(#T0DJCX  JD +m",q``YO &sbs,g!)ł2 c<-8t,5єtgh;5qj A^X)j#U2GJҨG-'CAY !IW1XXY:y.9,xhWgrֆl2d!NMeRQ*c*:_+cU3%Ԗ]m@&YL4ܑRr 4J-#Xe6fH5з;FJ79.R͗3&ezo`>0fk֟3urk;XE;:J-oT>-3p@ƕ ihQg* +WX.u1<$2 ̳A" 1L+cU)Wa"^(ȨsH>bJz|0 !A=nolֆW:[0քխ*Kϋy!cbO}h8imiU.VIU@#AiW}ͧАIUT`ڻ* ]a oth8tbCI[mk8QV;HAQ[c(RxEt\,#(Ǹ92xGOQ㾁J7G6I+yLWxCz֗LmܯVn[ncKA=ťzEW-{5WqA㫊UL* x<[#VW9m+NpR *5hf/~X?9 Y2EMDmb rh{ozMч9':CH6qr&f7I;%CZM‹<;{"0,4Ș u"y#E$̖Eih%s!ka4Zj_v} >PL%ޔ/8M\=$"NfH"55$ϥ; !>grX-qS<;K_pl4rӕNąf 
)URGn:Aܑ?!w\S1w*!^pYo*dD[sWc5۫.~2a|$c*]QtՔ^=upE[Ylͺ2޿m{~c[r67=D{N]1Y;*E"nbO-E.TsyJ}ۖ23:߻H܋QaHD@ToۉUzL-~GnJǠQptR4z0XŜ>0mwNn}r$ sӮ4xLV*Ip1f$rю3 g運ĕ&Re!uPyHLo7>,m& 4kj嶣:eeįE_˲0ߞ/I)#6@@KQ!sDAfq)X0:$L;9p >&AmdI"/g/CE6 Ι`2r`Yr$;g}IJmі%ڦc&@b]lVUUڛ)x*.(,xB%MFS@O }\Ŕi1HMP @&e(w*GK珪+٠6 OH=JOxGyʬ;ewe3-5a1&q7FIbJdD IT!85x3} hU=4D"T"ᐬ"$&rJ'A x WNdEӉpqkU-tCq?xR}Phs#FWZD)%G߁*A;^WzZ ]D umٍ.B|6'#ԅ\ts<ĝ}jr)cm;&(Sy@Q>%匓):j 5M=Fπ3\N0@m|?Gǔ&RAڹ+v ' jd {TiI"R2Y|0CR!qRջ/tǣ\ߚ7ͥnO_ |^3Ѝm~j}6d&Ρup}5G.sux6؍l uлn L#qm6v]h%Q;,-âpnҿMõa5ƿC{llK܀jMNTӶq74ů9f`(Ȗ#B눗"?.ZBbMOOP*j96T?  crPb}H5UfdϮU-L'F1ׯx:$4ߖul4|c֭HFI pkqLPsd,1Zh 򩖃5al&b}^I5ֱ0f1Ľy[Ɗk0xE4+JhgRçԜm1&FBƙH$BY(TXE&;H>(N^@^57l(;n(Kiqau3;f(YCe!s@yB"_˺Y@Jc,mk7+~[ Sw0[^˕9n:!7Y:"QꍳItT7 l\!y%Bí'_u<|^۶,o )T)s^%):Ss*8I HBՒm4Iѧzm\튃YmY՝';Y.8N^sBf#^Nڦa(hfY> 8V:*n*iO'B/n{8eZgeY6k۳rN_:.*稿R=t'C8-J?FtG?>?!e= :8>?o%AL;}O/w55η4djS7 t-Po2hG~VۛIx7pX}ė&Zţ ",nAQOy!*~F\2? ш!䧷_!*NI" B ,r hRXϙȢ XZvc;*>:zp$mp@4+͘BzRo)51Ќt$(O!%mjt]WFĖF@#e._wz;:UCӄFhKȕӷ nXz[ξ2CG.2 fOWwU6(̙PIi% J08N>Q_[qn2 !ͯwz4B`.A46DCudIrnRwzLh= bNzcʀ2%Up&z\NG4ą101쌜q"*$"Å,H8nhMJĤmD I iic&p|.%&$?2p>%"{7T.brἊVPi> S6zrx|k 6-X[Nh\[\fH'T,^JgV{{%42^U~U%_V!y¥_tv un4?-w8ŝ<ݨvͶ|%5\TIHY՞TᕜUr[y^ԞW?45!1HA'I ! > 2da_4%OE+w\AW 4C$Gg]|wcy:X*OioՁvP*Bɩ6՞S+ @hi-M(@#hDX2sO"Ht+gWR$KEML Zn?ƙ Y1.D4L,gjT^[qc9~9K4]I =<=嚏F~kڵj=ӽ{exf o3F"8v4qU"i)!gI0¥JHN%j 0$h6n[lW*>|d:)?=;T)\h(:K\0XnX)G3U.HtUkF&zYlz0S;"EoEzpjΟON>wkvhKKU6ƃaRBH9&^I9;ޘĨ55mL'2a1+ m4+ K4LߣJ DC _#H}cW. fݰr9(ޮJ5/ۭ,^IOo(fӳRGG1%( rԮ˻n\h\7cؽIpo I.ZuuCj`Ih! 
BtR^5Z8Γ”GX~`}0h$ Vs"&^C1?o+o!- 6hd;U[yJyɟ21Rmj\_]=m>+:6/6>[]*úx^rgG!!cMM ohj5#T=/_J9 my!#Q*1gq@Dv7E P@C쪻"w[D>Z@f[W\>AN)u!JGNIBZxr/kY_B[:/ EۜF3$p^9H8/&@\Kf$dG FXA08wB-}mLzZѝV"ɐ I%̌l /֤f"Cڑ(1][!b G]߿L`@Dg$!TQBV$y#DB6NGPdBތ[vԭ#XNG87o0̡PgEIElf|P@3 arjr!e>SٶiߜO^[L@\`ѿ3 {{N1 hwijaN1(FUsAN#2S;C&ǘua2cTH`Ҧ+y u.L3hLJ4Ť.Lj4O0R{bZKp2879 xG̎{+{VaT'1 A DѸ$ran70̒j/tѾ]ܝ_ǘ)3۪@2K|g_D/_/hUVu߽tOq󯺴tΆvBxxCC餸{;pMO"#g;M_?z]5F[!˜#r8*wV{4A]mi=7wHW ĺm ꖎ57f;vLqֻXGhu~'m^wq+[UVu ,ZTy!Hmp>p*/`<5''5=hzxsw￾9T/?@*o:;pJ&MV,Ep_vymv]^ߢ]%5^uΟl=v1og_~':կGCi?UɩQYa]5\ Ͽf-HUd"wtCli 1lS]Wͅe/tŷ6к제@wW4 fD !Q++W)Y02&J@ w__d.` *uu01>`8c@|~O)yD&Q{y__DSm54_>}Oՠ?o֦bKi=rB #659ѓI|>p TXtŚ0="`V zTQYE\LJJ*ۆX9JYD@ ,[)@ ǿ; U (eiRHSEFM3 m0JPL:3~qGʰaƌDKsirk1y>q.էs]$ʶv0=gkE3 fW Q5 gWsÓwt {jeJ=Dé~r6ydMV1Q Ȃ B%h)+Y<+YqRJ^C^&)5BH:0SYĬCnsYɓ* &'QBF&ީhSlB Sķ3 ֨4̹&kOs3݈:~ЉB)1PH4UjDQ_, ơϰ]Ux iTx0 Mu"&GhEkSʌR#O<^P/FDmc`kr!Ea i~לc rP% eh Hd-ba8;둏4gמ2ŭv1@[̫'vzZXqo/<6N}LR1I  M +rɻ-8lm^hr ʺgS ~\Z},Ƞ TD:eM:(ykmA6,sM 9V֠BS( PNZIu<)Й97+'~ ii5lMm k.tj~衷Lԇo'7s(0I8i0( Em:Ze) Fɘy;qz3.ӑմdu_ f :ڋvc*{!`G7U8_voHϸms<58Y /BG)7.܀ADS@jɰ2`q{azf`o[:L L? 
#dC"1H% $F\ 99k JGE  Tԙ# T1AE`"ϵ̉Gqcٙ9W?_I9F +$䠬չI:N)E?^Ixwy2uRc'5S0{amQG0=,,:l&9=9 sc#SMQ"7h hvϡztӕLBv;5RoEQ'*>1Q5CR=BxBzSlMUs*zSlכb+3ԛ5{9v+\|lkBtdSxvIɹY?,#,݋-E"L`*ko׉kvޜr)cՏ+6i-13xT#HYc+.X&)d{?t;3sn\t:=Y/Imh#8%)G%*BCYSU5Rǜ0 H"rH]dS0h &Zuvf-7<]Cd]BOmo+KiB wAŝȔL.Rl9%u%ҲU&Tl Ad'acG̜ՙa)x8Vx\jrwO6ɽJɽv9w8Ko=}o= g||-J %D/QjR]G̹{gXXYK,ԻXxpaQӳdN 7NNpO?p8<<|rx!ժe .S$`:*B* $ z.ux]&3ì T ںd!ZN s;CQ-Hwѕ97G8%ٗY4jCQvQ{`UQN(~+Y HL!v;3xؒD }M#8"]DEk1hJ +$oۨS12)GV;9 $P͒(&pݖ5R3g{8Wی܏cnM'KBRr }gԋCQP"IbE4{U]U](S|<(`)eqQ{4ŭ*0]55kO".C8-%_ggdS(;\\cN4RfcDn4Q6 X*G83yO6vǦPu,fp "lx,UgPG6Ap}3E?%۫br:>sxW?j$”< RR)pLL9u m`,nE[tH`% PRj"ȝ "tJT@A!T nV<="]Xuw} (ggevyo[wkLD.֟ 6 rǯo}}.2^6 Pjm YIɝ1(E5x/0y;gh&}d4, *I0ViDQ*#RZn]})+e,\|2l;je xe3t:؇^N6k_i CԔ8K FIB,JArFoلv>Z8,J Km`u4[mMr= S1%Tɓe:#~^g}|<g%XˆI`MQ_&eLj u`9*AqڗQ2Zy#0NX]dW, irXkOu uBQCŠ.IdKz#:[@ʐ^=Vñ6 Ib% *jԉQfфpA,Po-'Nژ$0f0~ $;/ )>(9@r !:kZb^L'??rl$:Nw"`^_OlG4 @F.ƊM=_ޠQT,GmW|Deƍh)te^,(pOM̅~P!VzX|(] IZz‰!&d-JQ!,"}ìRJ21SQtdۢPP)U(9@γYi WkgM`}Wm!wyɃlGEq;٧fkݒ-Z\It;ԕ65ν~n!)[l(%.u5ܺ[s4E~ӡ[=UزĖ]5wzj| @=_kz?dWN{]Wc^gϯ+~=D\~>pm7+-ߚϛnJ.<6O鉓‹zOܳDV~<6C Y5&}"xJM)uZ2$uUElGD'7'{Yy{?c4ZS%`6!$ zwH)}<2<:Gl#B:vz{C09MhLq?Vk矆UJ֭ҦOקx1`}. 
)YZhXb:k4Nd,% lrA{`"Gyob IrO%ڥр8͂'4#Je2{g: TA҇*Nˈ1&(ř@L ːU (Yȗ ȹ_0\p$`$~u+//5y'x|kIWʲ[Oo(+eum C|'#} !it |ĝ}bs 9mNF9W,e,JFv|J'StJ{c{b|#$I`FgV.q@'Am|Gǔ&RrV<6)DP%92EJ&; uH 59BKzCU"^ i=ӵ^2xtw`KΡup}[ݳ.됭zdXǧC{oc6Hr6KKȜkq#_cm5R-נZcSM]%$~|7;7 7%֣ [{bdl<{BdU(d5CrNwc\CY'8IlM /YG}|fAŤoz;>-OvHˣuwd{#Uj b?~VL|ywnk^iy9kEB A[%^1ʲpX0NC2G5fiu6=?1mrPEgETyx_F5i$R5]n*xRkC{sGUA KQF '-FWҲ3]򈯕udUBd(/F j)Z'[&8^̸'fS<Xۖ jn&r)2Akf ۄ=9IК=˚.w_f"phMͭ4l+|Q5qhk _\Le<[9f㨬/}kP8C(zKg7Ϫ#9 7T5&īNRE[IB|C7"և'>z&]U8 s ŸER}MhuyJUWjB͟u)b2GOSqQkT yAƎz>_\!-͗>B`4F_ 23yk1)K”j(#e4%IHg76 㻆7MMoDK;SWkF7S-OY.I40h"9Lh/ AMQ9i B*nԢ&y.4Z#‰ƃjѴ!2aj.(CS0$t]3rXHr(2z̔ } @9W̏]hGPP`hqK8Tg.x͔%)<ꍲVGBcHAic ׄHt4qau:hW%5F|gYې㎼s:Bebf\^:H1g;@`woDˋy*?e`h8Xp2 .cghY{]~s,,+Q:Whdxdqkn'tx~,I&5%03`' Sf]3(>{lo*%ަuF3]B辎|QJxdzԁZZ4VEmK'AQqrGK Mjmd!n82# IGB7%1X;!<Ԗ%B[X:Ք"ur)\&ڠh'q0;c$:'-uu)uFѷ.&)5&8Jx>u$|>KܷFw]V={虴#ň"׎糼}=yNG O%Ӛ`+u$)#6*cT"T*|jH>J D-dD4HNAdqFp!(\Xύ$!\& O5Zks 0$ubgܯݖBm:owf Z3iA^RQk>8XJЅsX7],mP6ju"=‹ɸ,n4\|GL39u@ #TT~ĸVٳDٌ#Wf|F>O6Cw+Q= Qٌ/=߮b6iW\7LykR^\@q%+c f썸r/*S+v^Twŕ3G ']L"2z+R^\HqWK:>jWMgqn'Ӽ-aE}][oG+_vaT!nHpMΉ!twUZS,Rm`^DJ$S#H,3U1N?Y@ f/chriR) f/C}=@΍N2ϹrK_퍼 }5XCY]=WsZn~jpy2};x}'o);>;͸`Z_ ..˶ rM|nx毵9>C<@}Cܜdz鰽ˏ *ʁowԀR SR1g5foOdBirSE& Ɛ"cl~!` tt;_Alr?J).ڥ}|vrs)s{ u3K]1}VmY)ѲR.iŠV!($`\Cը@M7ځֻK߲兴l1{ԲE!l!%aq \ A0TQQZSK2 %)ES(XUݵlYZӾԿײvx5Q#>0i#!ERO>X*ZRJvdBDw/<M^Lje{\|EŬ%ʵuVd2U|YI5IdޝؿR}uaK#/6+}ğȐ YeБl /_N!v$JrEGp +6yYĿoo *`1> %")7M!q:"P&b[zEVvE6@F3>(KBXF!9L2[D*H7Uܿ ×'i0NK˝1#d-ԛŨ՝cƗ [|:#IU&g1_{?nn5,{.mv;^H7MO x3'l4Ϥ:@z&uS'u3qpR8i~:L04c Uɘ w"d\r9]$kU)toOqDyfz!,d~jᄏh<'^gotEvN/xxK]AVrl5;S{Wh+[N4~<ٳٮl~|tm-F߃F8#nFmì ˯Hqk ֧XWhi'WcLoY*q6{NN>-tlX|>>oc).&lʠ9϶o1(wE-ﳋf-~ף_߿p?z$?zG:'0 )O_ y}v]c\%-TrKhOrmmH7_.>0¿LfZGu\3U7H_i.X%YڬJ)&3 lkiІZ}I;V&'GI&|<ɔIkS^Dm)R11+'UhkUA}ϓzaaV{>Þ2A'Ĉ2c-,$L*IR⣖>_gG^L&^~;_G׫^ZU}γ8($`8k.h0y+R @oCn Bwh s-=6(yo`I},yOk\h۽3wB^݆{yWm)_+`mٿЮ5/H뙐s=}O-IsWs6#jt>w| Dƥ Hh H-'F_ 7{g`߲eK"PݓV%$SJ.Aj2gfgM0g`&]61{wZ=x6fw/ٻqhd]y)&]ûM8fRt!.Imv(SF#@XWW,S% x$aEʚXG%ieJs=:JD]s7}\cSízrNчM3hxo+i],-_o3qW,ιŐ/#GcI DkRSƸv) 3z =UمJk$q] 
O@_sy>ˣNy-nonq粴\ɋcnYlrF E 0v*XJd O{LLN%b 9zB7[.{.AOCjffלomOa€.q/v{1}xV}/]gU_j,"gK>٠爥~WAA&JQ(tYKֳM<W5u^竨W'>u>K܍B7k+AuCyGOsl?]t*c{=:!3ALeah"PO6yt"':O:@1&M i|(%(r :ޡΰG3cjs)D.vhGb> m[l a$s":E(YS5\8aR$Um \Ġc0ɺlǨ3^VO0$u*nʒ4A"P{"X(J.BAdfHRΗܭTm'-k@ L7O cEl:w+*a)t2m0Cu39ٱ**Ֆ,6^v[՝erڦ^r~+U>>,D{_SrK|LRBM~UJ/R+0xg+05HHحszNJURDǢbCgoFB_ DGhQJ2v6{uU:IƾBG{^Tdų>;z|pNhp8L_: Yx3,QЫ|QXw.yѭ -6HTZUt*+EA d*";ߵmx+s_vgұՆ6VGVWN S Dns>jѧڟCJ[A8H 9v[4bNa3dFEgMLV2pu^( EfRM.Ǝagӹ~V XcWh:EQY2a,˶Mb.Ffj(\oC/[0"/9%Z NjtJ3 S@,kI+p:: Y1"v6{L%~Չ⤙+>uv&%E۱]]moJH*RCGYE1"jAp?!RW]| /3> [Y|ɧɔN5PGu)M/qSN׻Rg R{:K`co,Gw-pS@1,dhgAv@49(R hMp콊h=k dFLv+]IKhD2Ũ2 HD !Do9VxԘ!o1$ɧ/n,n1A&.6{K #`8̤G)WQ%[p& 9u6wh shu9v:o3Qx9r.IRV(EQZae]+pR3gbNG(x)ꛇ?hOkpfY8vcPoi$ !T_Cyki[ gY{8+|ڽX~0n.cM+K$MR ÇCQPm QHIl&:B?pV K*kG |{c])v&<18W3a`vcV^qn>fqt/V%ͱH:2GiFIiOq^⸄Ӂ8iA$ JXFLk^%tֱ$t"cDD"#dD4( >_e0}9mո߉zO$q0TԺuzl8(?}|F2L q[78)ubPdK)MOB. sU$4uN(EF*J!Rר* bT-ʛ Y'2#@'DL0Y_ϗ/xwѸQqs`>6CWO4}(dfcZ<# <1y2g !DDHZbQ s4zK &GL:*igKXj#jk%NR /EJI5p[@pj /=[+(VxN13l+:4KAPTyFxȶ 3T? -BJI¯0Eq1{Ft% NRY\0ʕZz^F:8/е../-aiqYTr&.%q7RJ hP8aWU_I蓤žKU,eS& bzk9qƤ(HE$nOC7L{4 ܒoP}?ˬSWjBjAmb8㦓wIݞOoPySF.آA@ (.SZ$Ɗ߃m=^\AP>&UG '񰨧eOKkȝfF85A%TP-vQYؙr‰ΤERo_?kcJ$ϡz:)P%bPF %Tbg舚k3Dž\ UHj8EwhZ&Ո ڰ-\ d;Е#0ν~Hn>)uq:;:]YoF=Yu-Kl9O^yFww^6yz|Ȗwן(-Ǽ oT y֝qm8]Uie٪yj·?oz!ݺxy(cHIVv`;H;qUf=UkMrт D@%ZS봴eHI(<:Wjw{`p0p_6`Wmiڷ3F5U kB[]@6l=w;MY >QyvPWmJ䍳eY9}`MbjKUo9V#~kvOs3`}N#YZhXb:K4Nd,%2l|A;qF1t ( IrO%р8͂'4*&G#gaZe TA҇*ψ1&(ř@L ːU *[U[f An2>q#4^LP'Xǹ;f~A .3('/sBk(WL)He!^Z3c<0xz:ᣑtqRN Dk򍧊D d!1% P(^9;qV8O׭?x|-Eǂn0'ϞQ Uv)2҂$J)(>*@Bw1쨾5_;o ֫En*!8da.%]|6%y & nt Qp69 TH/8&rXFY9jj3Nv-6.Ӗ.\6scoz/?+++*x.I szOsxd\f* QACaj2~==bDGoQ/9;6*'?gk _P Wa⋫?QI?Y=JdTM2aJb';!lGL Lzn:|>MO7_?X]՞{﫢υ(x /J ޫ,')"ixs>7}7<9 wBpkS졳 {# ur?|=i>=Cq3[/],X8,;T!h%Q4p0KxZ9=R/g&Xh9_"bNl4M`p:0h)]Wժ{Z]^Xd KQ -fܲpR4y̗7*grR$GTV6|}}O7Q_6qrkNN~GGO6bwȘP.z&ô9ktՍ[H6'YncyXImdzA\آ3l6 7 M觬~ c]w5Ŭ2mX?:ζ֝z;gx6hk 22(=zG}ӞESM7֯JbͶk)j;>hQrp>pZ\~G. 
Sjє$!ђ]F3Qnj[?'V4ft;jJDuIҠ)uFK& YfFB|4L+9d5B*nWꄶ!<e?E-:A܈ƃjQV!2ans.(CS0$@"jkl{3Xbϐ} dx>Mx7Kkt+֤ʷ]onGP=RB0TDWAEsIv3fQi@RFY#qo 1kB$ją!蠵^t:[Rp9XXPAHbǕ &)b$!|1Ϋ#] jwn/hB ΀PA4P$X >ɔEQ 3Q'"eU:ξbl A4S͔lns\6 9ny3̭3(1. s+@Z縐UuVVqneXg%SxD^ wRopb]d&~FQQhF1ۏqﺢD:-Y%uRoĤVl4t4{.(bOdU *ĿXi- ^PrȚ U'MU&O p98CKd:I#< Zv +q:9)Pmf%sre57aⅪ ^ȺF$jmUcis?&y1-S7zP=vSE \B8wvcW_!bRʟ+l?vfUR@ǮBvA.ٰ .®2 ]e(M ~U+Nۊba+fm->OO*;F.-e9Ci*t(&+Ӝu ߏΗG¹L2`LU %YIh_9z;?lx;͚3õ?_e]yqC^S ƦAī]Ycmu2OVhhJ&*F#b(D;MMWq1 0QMnpR9&ũEd?<;/p)%!|pK2ܡ$%#" I9C޵q$_!}[v~X`m&L2/{l,5%%K6@]'3#OʓpϯBno|Bys'ͳj4A޴\s9 Sf0j$7.(]cpO^wF8'֤@{ur38:Akc0mƀ.74!9~#X'5/V[uEQCBNFmhjcPG)N 0kjA 1Ȏ ўF >B.聼0!b](P!6:sP[r=JNwh A^AkaFh^v݊#:CI0kD(QT:ԡ/qWd$Xtypl5M]`1WuV6`Ņ%̆4<ƺ2?K+5Rl{ XU^FrIiޘȆR/jPѦ"jх6[ r  (lj RL1 v[]Ye; R5rwU #" OGCסd#$ ŨQAQi+JZ26HQc9MF2 VԠ u/r3ki@ƝwS aۙo]0s@ JAU.1a RLA9 N0t掊CC\jSS5qC%:sQ%@ `CAΛ2MC{X +TtgqJAQ"Ł.0͑vEUP4kϒ (Ez@?Piu `[UW Nr"YUDI)buCbKR|ְCBQH ʚ9f+|B ّn i${jE}∽}ABufA+th7XW%$' cEUJ=.sQ~;eCwL0ճ~9m{ϻZp bcgG]]`100!ƻu>%y:Um jmlAhqU,?1:#'`]@EE ]ʃ ;lHDNkk*fʇ`)C#`8ӥ`)`Q &(!t_15lm;Bdm:^z,0PFu5wNu2<&[WB(NCkdۧ|8ͻAM ym& FD |2}owZs/( 3 t]pb;4{f[*qx"#N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@':Hŏ̚:'uv)N Ffr: 'XqH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 dqԒ@r@Gߧ9w:D'K@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $Nva8Z;r@kb@@@@8 !:8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@z}i=/ǣ/?祦vAoݵv^wT/nί`z$RS1.EbZq h{({Pb\KzNj1tIť%t(ܠBWCWe\ +*-Y ]12(tutP bi9t.d;]1ʭUp*ҋ%mS 2ܠBW6㷁iχ;}zEGNbz\c>-]=q(t{R׿.0xto߆~W[7nW7m-<ܘ5t܉'8+ww_}ytyq>708528?.{Uys}zjZU>o{qw1y{Y^Q w7i^߽V Z//_GL.Ow;Uz?+ / P~;1>3߰7}͟EjZ4L7Y&T4Mv=ZE1b4h)ʨhF‚zBWNjOV/}K"såŨ+FvyʘHhpZ]nZN1hi/83JTR +g)nK+zAF\"á+ofAtVb.da/%)#tutLL-IjA׮] ]1ZNW2UZ]0?Z ]1\BWSK(rg0*YgÒ.0b 6оLR ~6tE݆{MZOFpbĆg=]ݏi7|m7'to-^81ો~K: ѨS}Zô&YaWrHCM2QN~J+GiI#k,p2OQv?kDUaxN2%$b2[CLo 061__"Ƨ!ϧ_Iq}\=]hv12Rhr(S2o=Ǹ`uH}hӧ5q(IB(R^}^џb[M+K~)thP:D2s +63]1\gBW6}S,DW 8:b6;]1J焮ș芜UdCW ׅ {OWrV]}r{K #\tA-R;u(lǡ+~ܙ+kszt(!U%]`Z]1\RK+Fr*D3isÍ+|:]1JRAo?c*Rf1thwbA2|>tq3#&>펴( uV4=m򀖇kJ[+t<&x7[y =_cpU޼h<-v3O" 
?_my^i~?:?YrsV>ٮlbMV&ෞ}?|MTYPĘo[ is¿^U}w-Ww!}>_Nm!/< EpjzZ|c8!{q]7cҲ77|ط;u7|S<.Nc=܆*VlNrIuA$u-kop>|7K%rjS//%=SF_qMa}mBƶZ&iNrxMNv[fdҹWlqY?3hsy5M%L~}nAG_;>lv[?sB-`X)%M8U|6NNYu{\gYM~ r>ʿ|<+KYh>3,_+̌Z+[R4|fW^x>hQk ? .)ذ>߰^KItoXۛ5RvL'zcjaW3)op(H1H BM1RKsq6q_Hw$`|3SIuGCQ#Ny6S=aJh)D* 1B1-(@'%vKQQgPQϊ%LXsm+R(B$!  1Lӈa!IkGן~_z۬FhgaWD~ۗϳףzfRtY`"0,$~dHykQŶmH`O;wkt3c,Q`kA.v{f:LMw6_2/wcdP賈wRwR}b^rk:kX^'N/n={[e;m9KKX-=w=wCI;u꧉{KO̧֟I噕+'ȾG[kUz`5G9G֖>㞥݉p~9him6ȇuPk"O|oq<``av =~ouio{7V# -ui3^\۠[zĨC ! erP\&D-F/)[1$܅o$o\\^>sH ,bM&" >tNacl@Ŗכ-.fCn<%YAuCHd#)%IWI?Jl;=*DIJMIU$ t@|hR uLFFUt)`v8އhLYog`U,RV`gG_|d4L_e3 Fׂ{U!I懰Jknp'O+^N}H&hNJ a=@"@qQVI[P{&YO!2)<ȜXcam+s r>Bx\59l8zv'ҡs)et)H>Ka_]I6ʣ8LW VDlcPAOQEP@3m |Ad2Nc vGUM.97L2٩NVco"zfODz:=<Ĵ,{}H ԃ&b֬QӘLSRr SiPԬ e(ϊ:Pg8#uZR)`]$I+de#`J;E- DP D9cNcƐTE|dɢ`lt>*1LS;]9PL}Wo??u$m ΐVAJheJVQI&J )VCI*bc (!-w+oP0d" `535{cCLJk;fʰs߸~nk1MyWѓ)|.n'[wO- Z=̷;mR:=3U>>T{KǤ3]κt9cR1LAR8 2E!(6akzb'f<VQErR %D2bSvi,줺snΰJg38_{_Nzm󛊌4[_8:Kl_ hp4܎AGliUb2eK0!JTDXX>6f%td@'g3٫}an' eJ"S]c8QsڝiDZ^tM{ٶZ `%jYA>$c\dJ 9^Ooc2#CiE'MLU&"1pd/`dEbRMcؙ8aKW fc<ؙ~m;1hJ I³n beRޝ&,|00jqqPuH!g-KU9&0ȱLZK=bgp%gҸ)ٙ]~~4*$RC, 3"j M^ bJA_/?L;c/Soku6v |is.uT.<~tCý ^QPwOND@CzC'9k[9J Oc)I}?I}+kwzBc!T ڏ.i)M@aHMtTN4[ &z%ϾjWl"sVJ A@ k3x*!*l-F&$/g~.1z싪@FZyR6V+>oY i*%v{.=; {yt4W58BƌęhRpЍAQ5蒈&9Ƚ$r ܫA"E8aΛL M@9mr*ζ at!ھ/['}{sDGoX3v`7Wk9MJ< ڼ4GI|1l2U\hѷkWS?![M ISDh)T=A~*=Kz 9mJڦ;e IǐM&R5Eg0AHg56oOA$^~GI>2]_?|1̓m[nPDPO ͫՉ?z=#]M_.h (+@Jcc |h/.Z.'lC,Q.˨'eNʙd_ґ?/}KT Q^/K!E*պ=hrUٮyKvPg] b斵*XY#yEΊcPÜBbE٩ڨ5BDE(A`IIAaW&$uoq`d^p|ޛu|%^31K+ >(譴"Bj<1\ aL!A!,#\㑲t -2v$ d jImNR˘sBeFf_ZyA+lc,yhy'`L\J%Bh+ȄEtHK, 211{'6! 
1m̦v0ߛ^'C^t^Yf6Mj94 QE]ksqsqm2,iƐ0h,1^M>?YRlm:L`y[mpO/~: ٌ֩\؇A=y B >SJ_q%{k2d0 jYatT I+1+)Z!{y;*X"m!k>D:hy.'32q}򁵫a}QCڶ܏yHGNE7*^>#nd3 H 1FpR1 #}wIng׾p= ޕԁSf~4}ݗyK8#-ZeHCȹhh*)0XtH]EjR)'uPyވe$\GF\-ф&ոݨwmm9< \wZ'CV5E$e[I忟 d%l1`/v%͍)޸T g01)t -RIFm0)(k|a'r5{b:C0rSإA3xp:\H- NIQbE*Li Қ)RRA8);ŜtZ4,m6L Y؏M Z4gɻ5레[^]b1iQeW"̲3:ttF\dp2/BdGa|fҶ>T1-Ӝ +-Xٯ/s>Df?9yߑp5 4.(C{ 6CѸQ )-Mç3Xvݱ͆:x)`;%)ׅtz3~t\yz҇e`1n:4,gxDvxv6ٳgٟN3LpLJ+Xΐs2XCQ x> g$\}ZsiكZΞ!p:_ӧ}p8kgv륋.=O@s_2_*@;/ Sv`?&٨#RN1kuHQb1PeJSܡEُu0WبJ 9x]' {ud\DM6>kYϖZU4-<)o`ˆI%M's.Tָ$ƌS10o$\t1n-U7RH+yAk| Y9Md~BGl59Lw̘9_& ߦi\M}Nh9QE:S̃vԫ)L1?'_5G~S_wnl Hh4m8Hd-׊V[uB[Zmi=E ӴqM`= ڣl>=Z3j@:D"ʅ!g\kriSvWƘU;*}/Z^hˮX/Oa>rJ "cj'8,7Xd{ a$F}h:[S(A; ZDl ha[0 ^ذ18Y,xM|;t7r>լ@ݖ4xߡg?0c~S*JL>ʄT; AKB cCz̧J@U54g^\cri;Tu$P(Pz5z!h} N KbHpqCH"H(1N@TlڇQO7{k,:39C=)ZR-H"TTArXSbD AG'!dR u"wWV,4^VTxG$4*%1dNF0SQivP B' L|;62.4ݲp2cnu<`#jA!NV[52he %g^FavJA۶Xf^peB{(ړR\* Bk%=OzK/H 6pV̼zQty/8 2 8![%F9p~["F!إoKOCݞ$`QpK ;'_Zy6RU$!VT+1ݔavj\q|?'] +%+,+x zuwê7xRP-~g lZyYSMC{b4-uh&\ub vkk!@*_LYib,B1cO6IY`8\? 
N|A!8v@C妗_kH$S;Gn'(8.ݚsI᫔D%FNoÂBzhӡxxK'L0fQ1B !0& y0QH:봪acsmIw[o}]t:EܠV;DC`:S0+Z-^P*‏8 3!+bBW t()1 a!J9 ]puW*m?]%tJ"Ct%SQwWuFJ`vJ()ҕBk!BtwpUUBIIOW4BtIvGJpEg*Ut(0ڞ "k=Cc6L|` .KW 劇tE6+CV~%b \y =W7eDbp:@4_ʦyy9~ss3 ޾ߔQoTLlfN.sbEL2xUdZ.L3'g#%0Bro`矾OwʎnDp6EWT\qXtGt嵎% :LmV)"=Kyស)|j,D ;e:xZ4B~d2pr8G|`sc-˹$ INéV8;eS<`s-:2AQ39up]ƪYBXo=B >(D:DW + ]%Τr%CWi%XUyW SwJ(#\ o3w\EBWVcvJ(ս'J.`:CW *ղt(劁+A1Ct12Jhm;]%} ԣ+ICtꎫ=+t vJ(ҕ[`,CW ]ZZOW %#]i8+,Hw|W .L CB+ZȐPwDWt͡)+i֬#z|BBj͊٠ߔ='~#fa 2[{ɱ6$Qs-9˝R 3ރ`ݛcl*JҲ#A{cۡLpB;DWXU;CW ׂUB)QOWXKp ]5GW޹FcgA<@d/`@6/ۛ"m'vSvd=Q5.[.%4E%$s z=U:jV &t5bAׂ+%5Uj2yUStexzprWXK28G\Y'Q 7&WU cmr9L%W3ĕsdM {UT X|t*fp"mUbTOtrC5Zq*Y#5%L7,Zp^NW,qW}R\` Ռ]Am~t*+•􆗢SZH|l%gNi (s"ȠYwXˠ81sLϥA]a,TyàV29a-՜4PPb ؇k<jZ:&T &ٙY6^uߍ rj_ XG\yz B\Z=+Vi͂*H1 zprC5cWPKsJo\}5=ޞ"&{GAro9oNzMr$oYKgݏ21{e.pqv}6^xmA^y33O_nUԵGE~+<׫ǕvU]x6gz'ߝ|ɓ;/%!ő ݺt>tQ n pכr7sy{nslѫ/wcỂf'9K\c.~9~LE _(>LOpwR+r_J}\8uzz<'9j0$,u=á,V3ju0S;bh}p+$YUx˻nn0*%9vXV+&}3AYU;bӥ'~b蠖b9yG-Vo ,j\w]7 GZh5H^܀-ٳpiÃwԔi,ukD̺E蟚nλpϧoᾯuW7PU6a!|obY; `X iJ.fobD˿Y  GHxxH](ǻ|6N`B>#0~{㻇V#PyJߍG??_~?h%VysLz=V莶]>}[V7vo7/sy)ݔ>]qԓ>#oݣZNz'4Zbg+nϝ"gWgzL'U?6ʓWs񮼟͜9: Kcb! [v!Jύx<v!h/?k^wGaP8&a=-w<~S6X){O3'yWǓ JZmV4F θj(YӵQQB/#er.cʒp5J$W,WZpj:X f+"\A\z+V'+`C,qe!*Wcj+ bbSjsի:omM ;[:K\A-ɏ]A9ʑQ8w`ojQt`Vq*2v5G\5%{W+"\A4u\JM f+縷5W,W]+R-!7d|E UVj B~t*flz:{NlV:LS+%tpVosss9׽+vi!rp'z^|>4Lp8!rxh15G:iʽ'cЀM/6*["WָUXp5C\@m<\Ak W,7ZpX4 +#"\App$,W ė>b*,!p^]\&%WV8a삫ʉlMɠBXQ X.U]AjgzqELagAW {WVO~UZjw%z) XzAIֈ0u\ʩKUpT*T+~c*Q XSIgU=n7/hrJZ sfݙV R$ʭ@T9H;:UGN5G*t~Z0L?U{H+މ!S Ⓙ/oQMX^m`, A|vW_vE]:[yݳuL?xdUm~wV.CuϻwuwVzrj[USo޼0O+Pc?PE ~F4yV>nϑO4g4g;DOw?ǟyu(7eˇ9k_ۨ;kB֧-yY558-"dru>Jd[}7W~$ wM oob6wt/D(A.S|tMVL^;Ӹ(6%6x^s=>7T#}"$PƤY),Ŧ*eZ 4-bzvliF!WLCiK9*0*Ȑ`,뀝*HkJN(s4uP H7nLTJ,VMNh )R9P+uelt q(wP-j;TlmMQQkDljmdiuH2ib }a1,:3UIX3b&P7ImsHLŇ651 Gh cbxoҗ|2 ZF !, ewmI%Q֤2CdӴh*F)yߤF:ԦR7=8_ _Z6  Hx0bо-1_Yx1IFI$֡7dhuiLHhO^'DP,΅<(ArJDiZT)-R+IQfhuukl2y/T[%)Hۢvcp%H ۉ2m/#8:ӏkuDICj[(1 $P . 
V"kC/ ,$$@ol \ A@"bџzTtJ)zPFV"(ml5吤@ύQ 5`E|PCIZRV(xGkE"o)x~;0H%"_6+"J>$[)koh6+DQ x@b-SCj9]G*Q#54%[DyU*0Ƞ,UhZ1G6V+tp9$ YGG\ZP2mɨ6>$o+.>A00xAҸ6 snF:n(Z%X@[-w%#2JFtۨR:$1Ŷ'p ZMZsUTi@X4$(3*x ҄9ۢ0X <ӓ0MpE([ T.52Yg|GeYgQX=HǍu ®DA< |3B5DoȻ|IM0. !@ C9X@Y`mCihti16)d?}jUixAEGܨ Մ#|(\ $*L1R`L o,8,Vgҡ 8Y)AAmOHSQ%\pn wP,˻MZF`pHp_H$ ;5 RRvٕk?.>@'2 % PD˒|ni&~*bB~Mc#-d:"ZR(h@ I |!w Y+c[_ yM<dG6&*'lW:]-5k94s]Tjz=z>Yݦ. ZMG`8Q8t ụ`BH˭@Kʀ00 sxgXKH̨bB7 ҃Pٻ64Wc&-" X,,%`PWtDv2S$EQ(RbqdTWթst0H1-+">X\46`Em&6Xܾ($AJu+Sx$n`m mG*YȩՏQ_(mNɜ/+I0Ɵݟnd_y9R`QD6 ]{ #0z;KnCzwxC T&o%@_GWQ*#D _I aʠv!u6yЎ2ƚvh|tѫ3K:vwUM9{Ռ:`ШXK4r[< Th$&e#s4hͱK,F+I@8KMF|d~ȃ!28"ep% ~X8zaE^Y`́v CN$Jjƚv/h^,ngi&0 ר8UڀJUΪreV{EXYw _v@f&=. f\M:(2`HuZwĩg7 |ܕ`itǴlZ6k3 C,K0MW-d0l&xfs`*}ӿ{BKQcEԺආ$ڳ<51d`ps7(g;]ڃy2 |SC^b,THs1c4Jvãlk&^;%FFw #TA(X`aH*P#Fi R҃V;53 j+Ap |rp4r i N:3A ;b{B!DE T( ̈OZwzր U eh~ xMڰ(RVGϊSZ]lѲڭH U gF((I<(1SLG4Քq'еwִ<jugm5o:k Pٗ9pG@N;,33̰:,5Ƌ`ABl4BS1F0p<{ 5a=GCXe(1⌚Tt+ W7^= \8 w9mzm!$*V.uADC[v)&YЬlopM8$CdM]XA7.H56\~@5=]h!w*;cgt0H5aKBuv7fR;SvK)U! ;xtaW 9}O[瀜:p=*Btam-_˫M^`K[S$jh4hH;ɤ~_lhUt\⏓MBɯz\llS. 
~!O9-+].lδ̷O\5>h϶?Z.\n€lQwa2Fe.F5w-l whsP?r{&??>U8}H/I5z0lh笁b@ǘ͚@$%( DI JQ@$%( DI JQ@$%( DI JQ@$%( DI JQ@$%( DI JQ@$%( DIMIِjp0sˇI4'%1 Դ'%( DI JQ@$%( DI JQ@$%( DI JQ@$%( DI JQ@$%( DI JQ@$%( DI JQ舓@n!%4\Lh:$n\) tI $%( DI JQ@$%( DI JQ@$%( DI JQ@$%( DI JQ@$%( DI JQ@$%( DI Jo!!%tb@I מz & [CM:$gMI JQ@$%( DI JQ@$%( DI JQ@$%( DI JQ@$%( DI JQ@$%( DI JQ@$$n/ri^4mKM[_ܩ<_tg`@%al0Z%=P .}9%'3H!_~hxEW~DWzuRZ?vj@j^<ܵghgoRJ{UaB>XH?Wbu(e/~$/gZxi,׿~|%ur1, ʖ}4g}9):y9'h +'y?MAqׯQz 1NӫyIusk*nK&-.N][[.\2 68۷Z_5mYw/WmL0_jb(S[䜀-!x\9$O1\1 sT4j >݀ 2ЕhA)~tP DHW˵Cعr>VCx#+%$|@t\U+P誡5骡tJ ]a~0t~$jh_=ϕ]b0tZ9Z/y~:]5殎]PQW lh=tJ4Z#+'-Cj`3Z;rg骡dZ i13:5/4{Xkv¸[lm7 ~'\*6;vSn/'x!{^=0^T4lR [5TMJuͼ~Zy悽6,.PQ]X_((7f}{uRةMC]\"ԜRI5f)ZֵpU;Ah{R-{fR!Mu ViKBXOP,{iٞpˮ,Guipw:c+g]} P誽@CR)#+Ѷ+יnR2#+7^b0tz>ZuꪡJi%])KP誡J-%:Bm!A|8tCғ:F2F?{핮1Lg~(tj(1ҕ5^!]QÙjop ]5:]5]!]9+ܕi9jp`]-g[c]j0tB]}9tXqBǃ۠Wz{v(laor_U.:Q f}6_O1v:QStzuYn|<8N'[o"_]W]aߔE>Uڶ3uJ1B%-v]LIy@FwyT6~ |sզ竫wS0L w}y}{кd}.wxў)fݽoc/UO/. >%;o~X,7ubNO:G #Zfg\B.hF&bAݜq<Cqy WKV"yφ,sR:"Bt&Ҿ0V^V?߶6?YcZ~îZfcίtgqOc?^lr_v{>>,7hF1u8k(ߧ[N__mmϻpqKn r\Yw arۀ|udc5y&/S>AA;sS.(:Vw4EI=E%DI"Q1L+cU+_tTeᄑ 6Yx'5dHS2ZuHjͅ:$Cɬj¥$X{\"F @g%Qop>->fCDqhg-x7G|_4%uƚ~fՃ JB*A Iz}v=|?7H>Oz}yǼ!#wze'[wk9h~8]N}tFٰ~Vcwܻ pl^=}8PڻN9hk ,R"*ZVUf|wWKޱKaEU c69l <'a _Q5Z&<5|}H\:|pB'#:UM,܃{|l;Vc]7=#ϵ kq/Nq'\ͥVGݘvy{o Ȗhs-C.ăNS,= 9K!@,W;Oӹh5,ـۣ3Iv%1iMΑ!8Gw&١8Gb9ROIu8%4! >cknRlyRWiK 2 \tܗ2b\Ee9ùrY3PQugopz&Q[4q 7[c-O!HM(re\хr]el~w竿1vi:u#Y 6{#B㵳`lpA5E2ROu EIG2H$]}ꯪp&ӫ"#d:M'W~Q͎k7tf?;#} ) y~v1=NWcJPXɂPRLFE$-.^L.oATGmW0c'U8)aZq^b OAvȑjȓ]͐eHuͼv! 
U$@?.y|-))RJyeW*ׅt/uKؼ qN椖σ0ʜwSUgQ2+싽Y8L_`z۳~<9|lн"C9g_ 10(kBj$ig0!.2o0]9gj]S:n`5Xm9?XF2p0u 3%!j 21 uRmƧgvɑev #8 i|-Imx1LlILvgݨ*'y0ȸ1m>rn4N fR" Edo/5)g}ƭZjaZiڴD!AE4P+r-{0XYj%L͙"]۪{|`8X*Фx`4/5<>6EF@1'V"=4zlWIBBmwFImg?~iwп4˻뼌CtF}dFz)|G4QSIi#zi]5T8]Zbp EK <:fB31Mr$g>~O5.Μ~oM|cJb*)Ȩ?-2BI*ϡ޺\6hJKaa0! M-0zw1Oo~vmV)?|P;NE?)?y޺w~%S~I8tsXC-O5xFGs1GHZwC0ǫrzN`dӀ"@.+:@B]>mZa%eN(>Tʔ. KKK1aZT#t$V c(-T#Zur@+0>8l$]9I{J>&Lvp }m5Bt>װ:R0z/my_6Pkn N҈nsxaZn/v;5f4ۊYovNrk=cN\Itm׆)`"Isn9P!k;7,Ռs(Fyn )Q7b0\ Am޵φV}u];Bm$D23y:h F9!P0]c,`G/-6_/;Ϭ )i vO8=~dLO™4S. Z#b0٣xh{,G_ٚMhR]$Q{B|b'7,o#VꨃOyc6* -4ǮkhiLZ{L{28$ĩM"\l*֎g0Đ2ujVם M+~<0+|;6h{GU1CR]- JV5"q!)ګLD~!)oik'kfjYFIo-nva;_cIu2cjV&?CEԎ1 ]=Je<.ҒYLט %]m R J uR:^Y9W؅/1Һ\xlĉ ݏBNz>Og^MRx8grV*AQtl x"/0j $G!):t& %&t̘feY;,OJ~7w5R"AB[&u5P(ـq&x(6d10ݿ>Uf3eQQ(8Rmu#W!i%FB=KA:"2OL1$ƴ6wojD˯{A櫔 kh#qѨF V[]}@IDFlyN+Аq')R!Q%]pWcCXT9&Gop0ER䉠*Ǵ,hasLcͭrš ?\c5VB֢ !3 pwupݻL>ȧ/J,h,x87ۆ ]miMZ^Be͕{4&*pRb/{ݶiseΧ9U>cM&޿Y`e,biOjQ퍺k]qb]O<ǡCe~$d,;#G+^L[? uT R]ίmP0 $\i n<6<,>6<(%tD6ht4@74k/@c<ƕg 3pdG1Po88%""o[pl30$gLɵ-Pѽ6x|Gq:~.'6Lo6p#L|;f5P9&VRii7WhFC9+*Mr;aWt-.+ J& 1{$z?xpp&FqnH, )D@e]IskJ0Hl|1"+{ݭJS4xth$"A]EBct9jgg&6t2b]0 ά2%Jr ~ ~+c(J#8` |Ӻ(k"ylZK9`^VJ;w(wԢsJA"e=,CvWI*˰h7xPx4fcN"bf&}zy 8W$Z&ms/cEFT Ќ)U- I[+ERvv&JlbjEU?=Հrc%D}E>B^)Zk2JZcC`|本xwc8g\Б 'HnP9r)L5aL6 Hlo:^u>6Bմ`2m,#!`T$vJKGqP\7Q tߜl)v !3*7"N@(m$V9َ1^)^q#8g.Zze [. uuyzog' 9ޭ)T/or,H3o u5!Z/y9۴scyH[뒖u^]Rⲋ6U2(<\_r<h&m7}_47MʤJJည9h]2ڢpkesL)t?W}/} .V()Krddj\W8( ۆ6 o"VZZ`~4h^xdz(YѹKZ]jtt'A%>tf)Kp+8(l2dVdj X#um1Ɔ5V7c  Yl5to-dq,ݯi6 G Pb(%{wR~=Mrh $@ l*SNI7 thP@⭷4Ltjf o=߾Ikњ mju@--ISف+%>Q| &dF0ojZNh;J4B)n9wk20 _?fcb(aS^b)ÝτgtW}(SX:7۠%mP&r]Hn@nm3¹Sz5Vo|wVs#rA1NaH\>^՚o}t񰘢PFAWG`-mz.%Go{ee*aj6J\L9Ț\US} -;mQzĥ1'޼~ 9I .* Z@Ŗi99xOX/G-<2J3@H* :>lo Nѵ"5 t5]Mfe]s*TW+7ݝ6!A..?>M?^o<<ŵ(|7ር]MYy8ճ^ E 5 $oԓ~`݋ly)$Cmo(ȌwentSBJgkfSK]/rԢVQ잟X6V[/w,9};O`pNؘ/' DŽ ɴ 0Hǣ>c?.E׿1|׮h&<4#jZAuŢe .y\|{g==ݫ:v 7(QD[ӷw^h&Q! 
]YiK(¥:k:Ti܄bYsSI7CVڟILkg򯓳:"zSy`v`L O,3,caF2(J# (P2LXY@O*⛋3,Q{ڕtkԣC_}$x`+`~< "j1ċN#+|=]0:X=`E$xR9}N>Jw2A1@,jfYm(amLcɳ2JS*jee,KaKӕ{>何,|'.ą^SX?t34r›]M!˙BɩQ0|<Νc\ 3":/`dO,0<(+j(VПZ?/ iP g7l]l?'ZJDSt;.\fmp c9snə,W*(VGf/pq4qR"•HPX""eI s.` )eb!f|m=ZmNJ7P:ڭZ 4X.d|+`װ/ak{,O')L-dYSVnF/USq+T4*Ia'8etYgJMŖZCi@>ĥB-s~g/U,6- P䢃QgWQU]D  \rz3*JEf"Tgg5$Z ݩhF(eFPwRI) !d ɵlQT['[ӯ}j(PZ Eٮ,% k'WȠid@9Ɔ{q)kSƑ)m6ekN\s/?_ċSECy1DUG:dRO8CLmf< h8Xu 0Re}qgЯtKAfN߿vnC{?K*57z답͵s6DI&CJYk4LQq207+fh% Y*,!8Ju^0md)Rx)ڷy( 2ID&55Qu jqX`<'{ N!'(K2 H hYlB )-c;8ur:PakW*Kc\h@e91xl0LqfKZXYph1΃(@70"݀-hQ F,}B1dq>B!7kEXDjA (P h~\IvaL9{w f*'Or*'Orr2Md(`#Q$AlZUF) &wƉA$՝u Ղwu?1"$2s1qZz4}=O=Or@8!1ta# ꄊVyϢ{mm b#e.Z.]ȣ*l??1>+y+yĔ2pYhJ@YMBG ς ״c76$ÚdG?Oy2{/5S.^!1QcW̰gǂJDr:AbmJ9>o?_?FguIq\!mea0a`Cf2}u[!K;u8݅>8ko%s҉o[38 a?nmo囿ml_tOCяG^^򙽭[ͭͷϐ9w9~n~ug3t7FJtО v\ػ3^vû[\tyq'MSgtmdPrŭdtM( E4™L7/ʿ/aZ@wz֋w_uT? JO 's_z}/8pIyҩodl7UEB僋N.`0lpŨ#QjmaĴs[~-(׌>!pKe۽\ËN uCq\z >vy-S1 |$itwEx{sbl |ta ow6?;g`{}~$K[o;ӯ;';8wyuw|J} krT['ov[hZ -|}eL!R< p51QF&:OqIDH4QF%0,``@1\+coj "-dL7 g_9XLØ2A/`΀(@0#KYo ꜜ$NkE\Z|qm.8.-v_2וa*%1u5KVWK)U2) y* y(ԮTYPz`"Nq|Hh OJ,'RJ$VX\P]%57!9}vjohAzM[5% Z(PʍoHJ͔W)W\!}N(ߴB+DB]9,PJI kPbJO\秨'^"nd# Yo$dU#زdQrɉ ܙ>b,d f_.Ru)8]\7bL9 /-*}10VR/r\ =*^OZ^Oq0Ŝa*-tbLKdϵ>:;|`$W~Η%A ixw-`Tq >Bյj%B!YڬSϮ*N7r f Π_Xؼb}sC7)R+Y*R+YJRܨ7  Qns# Y9M UcY xfW$F9 $EV8֎2!JzLBLZe qw,80GPJ7(0zRbpA4]Q4|>K4|>+h9zKa󠽝YYy"tж T ݆c[Pɷ Quj( ङg9PZBkhКLԆ x j"`% (E q|?Ex4vRyP =&`%H;J5t f[8,`#@{e`a81gESP+:GQ9cDCæ aZ)%(/tF@ɰ]L"TOpxуlw&26x?{&wW8}Tݙ0h;aX@ raG5kʅ @U 8qKܪ۸UJ /zk~r.csE VWۘ:ʃ!kr9`a`|DZ9}Kh`!Yy"ᜆ}a' MШFS:1uUSUC(9vljM?vΥ}iT9ȊC@TRRMaַS]#=V!lߗXn}J96:sBJw&dk0dt!,o6D_bn} xahPJ<Bկ3R4r3'Z2@51_E[m!>EOFTiY~ze^hX8g *rmtxǜ lm콍JEFHD& 1R(C|+ZN= +C]jR.n>J(Ď# l NĹΩ/6 Qt#LK&ݨ8@ f4%OٽL[C7I=ǩc ۈ-<*$Kb dtp*[Pq" c.-+s9c [hP8gHg[hvf,ˊ-L4#ΫBC5M3[h_4˽tg5{;Oa;f;'P_q.Iu:.(|i2} Owεx^ۋ%Qbܫ]P]͢TfT"vySbs`2ʚb`~}劑dc3hoNfp!໧}3/8hMM?rY;kwot\L9c}d?HUWE4BD8o'DNq'Ē?R0V9b+?n 콝ف >kVq9rnRRʬDE!V\; _}xI$ W?W%Tk㥗ѵLv45xCiYvwT| Γ% *)'nm7@Cp3ݕʰ]|.iL}p._/[(g 6#H0-iAb yk[o`{jxPǿ~;l:y 
j^m|qu׼e|&BDm Nus{<҅huRu,LjHH>[υ\$R[8&zm8F`%I`K &r8[AUn 8A/RLvPovPovUZ,̶آD{Ԃo65deD4ycU hF@go+Qn,gz(G?m|| 1?92uOtx \w_!aRЮ_6S\mkT>{h<0auG+FVh>T zI,5drdg%M}J%ZtH'|3N =6849G(`ck執>y7}CֽAepyv!Bݩk>ѧehG[hm=m0V߱9`.XFpcM^j&:뉝/iS0jdwx{aDAS^;?6Ҁ*¿(ǂjLw?5$txX8|aKGL|rl+GP,* [V%UU9BCH5q; 9WחgCΒ5mc8xsvs2}fTOisW`F5RvUR,>q͖~[w'^+BVE/Ys\u[Wg06h䄐S% IqJ̽& KA оim C= - )66_~w^w;ֈ`8$9~, R'bjW>t[6./۴ T;rs@k V΀#6wlףfio%Hx{~y!B{b&> "_ .N~p})W@{yld_M~@P_Mޫk(yB> ȭ{nE,XA l@v݁7f5k&trN$[t>ݢOxIL9-t>ݢO-֗nqOCYدIPg0c4Z 32(d fMG)MkJYTu]E1dXЬS+UA$ N}j HᱣXm,*PLX̻@UԵCh7=tퟗd~IoOoOoqCY;Y{Jf7q><#>a2͍T$ZZelOQr^h=up:$9$wx4|ڄ^!t.G;lʚ!k0Tu/[܀__5i0mVa5k&P'K:K?&U˔fc5 }1Y8{Aߛ>,d>N=26ྼ~0xtrqg7nQRm̒b[!l)bZrNF4k)%bHYh)gp͔B9[RCi4Uta˓πGwӅu-⯺_"1O$bG\Ƭg6yJ\xvn)ʽkh.H PNʭuwO-Lu̝ V?}EFTo9hZЗ+q>m6/QځV/ ]iGNb 4q  V6iTf+Q >F"D͕*qiL3ymv#V=-W۵H'ciVZK;O^JDpF&]㚻CKQF?h$A,wa(;uKh3߾5˞odvquuF]QWQWe I)wII-*)d*(z+8UZԸ 'cG/#qO"۠B䤼դ7ש1'v?|4z#zSMV/5,ew'r_.y2}=o{ǩq~\^>dq2PFc%#Z95Ryq {Ia ꘭}=.z]|eaDo_tSPRLN 8Eƌ?i c93nɘ,S*(VGf4ZyW8fVCZx X'LDpZU!케˥=^+ )]9#sγ4mu/݌υ֢ 9}x,ZVL|tV !G2::#5:tb5Ǡ  w(D U[Nq}cܴ,(GJ^TD D:޴?GqxjHH5 AJS]tp ,J& 6[y q{`:i5˘TE0Ap`P H>@ Ҙ#Swî;O ]dH0\珒TcPS2͌>m(aց.A&d­ǕzW3#5ZݷG1 + GM)E}w}q~w- ;Z F>=}~KX#A ~ثq"I|./_|ڱ5a ( ~ jk~1D`hnGVј\'n %aӳMe"Դ}22GT _ q-h 7F Ɗ[R;,Y{T|&)2|8)XZ D؄:n^]Pg;t_^We;]pCn]*0Iz &`6gpfz1T r}vWL?°Z^![B&mignђ(RtˏoJnC_]G*Ӕ{Az !6Df[rEkv8߀7Q0\mr SF\!uy>s>FS{yTIM#D&*D1m({PluSr4贚jN)j)Y[5&Yb!52ie7\cItem`IHƜn!<qJ$^y7$R"ɤ0a|)i{Ŏ,Ii#)laǞa DJ>\nBZe t"TDM}}vJ*`{kQ"Ά2*%b\C-HX16Vl Űf &J** B_1h` V||we&he}rC>5{n9q&[0 s [NXRPx%c~_Y)@γ~r9,H 2 ,»O+aO*©"s{7@ IpvdM B!^೭'> I%~s.C(O戮T,2&68r'ct6l$ 4I`k6$?vlMp6\(<ǣxYa_2vub)Ș2N.:M4RJ#Ăٰ|ۑ5c`-XJ7 _Ky}\M8*y/`g=)*I6 bRH0|JɅI$P>tMNZ/E츽$q8l$ lܷgmLc2,;71; HZvf2".=a2Y2Z|{:M$1䢫32PZ VR( R 'dF{8ZB"Ri> U;H,!YHr /.0c >xTPncswF3_I4Bj@gGL:Wd9,|D[Vb7~D0{gѥ9 G3uL+)M儍y $i&mЎF0G'?vdM&vv&%vmmܺq8pP%}@3|Fx)K F %4PB$.!PL2B~Pd݊ Eiˠq$}2 =O P`#8)J&'M,I1hQl1؁qd\H~PȚBB٦m/g9T@p^\4\]cKaFPuR`k:z?AdJ eHM8D3HH9wK\(v fnԚHp!! 
x{s" .S7ܗ~OjCs+מjC 5goZDi^_ Fzgi} t\&w[  ~??hRB帧bB7DpI iA߷W,C{j'ؕU*d =fBm߷lFb }֠GDB}/ 暷i3-wkJ ?`mR_e*i:KAe v 9}T% yn4" gs3mhCI ( V.m"3Ƴ8l8y-w)0d XE&'̳% 4N;oYSLWdDŽaS\B鴎z,"!LHƲa #a-"E$g 2JBd*#!i=U;& 9&cgwe?+|G o4hlI6OO?[F1orwM!!sq0.X.By 1ڊ92V[Pzi4"цU\dca,CĈP`{r} }s}"lp\#L÷ͮ۱5A\dzC_zg[*qy ?Ieu0~,?f;'*a?'F%Fn_܄-"$}X\q^Ӑ5Xi,{5m]_'VB-MnpK-u3C96WE bBɑeI/D̹SKF ttqp/%#7:+3n&G4 0haly `AsGݎi Lr—8Ay@B$ $Gp䈷XJ@)G8{: Z}x  Q2u;ka7澾ݾQT>-O77~:_zxwO7OO>q ?hF4~v _^koxA\W S+\KKy/8߿5>Feh&kޟpnuz|0K>{d M"scF*_x8{U&S %U.Jo:A<ȩ`~x&^2oMڞa&.DVbMlA $xC鵧3x9Fxf%tۑ5I31{#x$U(Ub!_+fc#WlIe }r͗Oxzi)7S^JJދ޿ B#. ӂ@NsEhɔ6`8#˱Iag)ۑ5yD)m3a/^E(:3ET b&nS^?ZbYC͏/%x$č@E#ȒI +1D4XzHyuw7V~j卾*~;m+7wL]3AHzėq}hz]7gv)wwOÏGo$>_籶ge$fm^9R :25=^A=^GHA kkO񌀩! 6hcb-K6P% ]fj_6U oCj靾47Gd)Z`(j&xI+BB{(fBFdgLC(1lgvAxௌ%^EVx;z@9LIN ?fӤ[3O&4N]01mA<#2]I_|o_݋G(_I>m޽|خ 7Hp+ߟ8-3pUql>˙"uC'{?dN1̮'2.F 8 s:GvA  b,|u5C9N݄oIv֏ݪHsZ6fH!O'g 7Wd:N#˱M{;-?vdMKttD&w؟{iOڌo{+8? ʓ4|w}M{ ?7/}z_c]{sݷ:]7a3𷿹\CO|VG"{[vO팏%`sr9G6Sw )]mD3d5J׸C%JpĽ`ж%ޙY4^8+},xS: Xg[1QT 9irtا33GݎaHp<љit'E뎙!%Nrvn^9Bv&lOY5 "` E=hx =AH>wo־ JZw7Qɾ&I VD1:Нʦ|]Ep]nv`/cSO~^u?~n0it3~A}ڌlnh76xM2fjL6xgbSx°a.8"ؑ(VMȯOܻ֧x3E*k.Q|Dڴ|)GBublwmmrje\@#U0dΙLHtؒ#N&[ 6u@7284hEpdHTEnbT)%inTLɍqSd͋7R$T}Q0e0 ^Q~f㊒yV 3n+4(<*KCԺ~iWhG+3zg Ƿ#&ǀšL2Ab]˒egf:ߤ d&1W1,84Juc!ƹʠ5WĮ6 &| XjgIcOg+&FZE^:(FVp!!r(fL,qZ\ȯ c-TUhZWYȻh3 gI,{YѕNd2L@ M A9\.V`=J {Zp-WOf3ѢjcEUz$.d?7zj ߹|ӛh2r=#sTuB퇷bG 9plgb7nnBݻwgh{t17N^*h2)%s5 ! ϩep`&re5yl@f"Ox\}$+AcZ% C(T/z -xO% t涫G+;:c' .ZV"s(3*r)>a1rιI@S;ʅa@r|<]NLۦ R9< Naf;s>>Ýo bŨڅ\VȚlY.a2O~m3ѯAZTu]@zuGڝP̀~[pw.z5nQ,ʝD[YN)9et碼)LНzsǞbTI}Iq){jMOz7`Z. }dIbwc5E %y34_Fi87HUNy ֽo%o\}U.~l^34I)Ʊ\im=R&TIb A{!1g+L8`pvbEůB񹗡z*xyʖJܲRvX|^[i{u,h G] ->-W%g:ҚZ~E@46^ԩ&8}U&V [@I ?vIA1ho #tmxbbEuD`Pu * =fP6m53Q5RsKMJyUۤ5X,.^x?]*<;il6jz?L_^ԍf{:y5֥;7_1[]j7wAA7v/7lfSϔ]ꅇE0?L]͂k_\_X V. 
7|M:~vϙq<-ynyڭ 9q )8:B1oTn}"\Tg-|4vkBBN\D7ds#bR6]ou.7xΔ{xPFRj/b?}[p f21W'8%]ewt|ś~ #nwk֞n{ / "|ZKESXp&S^6[8[)\>N{W^!,P^Q(f۞4d%el5Y(l|lAÖ&l f&l.e更)Z>|6HaC_Aѽ&ɋ=O S^He ^Mߒ&u)݈eQpJ隨bnr~ *[:?,??ћ8)_F'K^~uLНN&ţtd5%m 7A.Y7$V(PsN/#Y>ĵHs:6!a1<KPn@ *Z*4F CIz}ʅL# +40ޖfi} rMRlz湉eABqɿ4C<:'camz̢XI$cbKSeV/O|jQ>ٍ=RBtǨt!|8|.KP eS U Xy5T*ŌRVl('|^} ;SՖgMU+m lx<7b?>v{gc|ل[cgCwAt!~^. 83ex#U4g;]Z''ܵL Da]uEZC伸jK1]*:lL,[`x׈^)=34MyE%ѰBYOo,Pk{@~Z0BYP;0#| 5.hO`^gS҄yI@NIY%tt rvDǕz\Ǖz\DA bXƌ֌SxFJ1:YJXL$*2D% ~ `ZJ,2OY(v[YdIjXȔʦO$}Y5Ha8Mp\0Fég"m&tyhW݌!k<5.˛ {ˌ_> =8"P!K~.)?v~qs:8[jڲJb aiI{+1!i}#b#ִx[ȱ0ESVy+mr_z#r[Xc[} O:f shRk/hrhQ5%骙9d֖+x.=yJЖmy=!H[;fЖ69ːoYQFl9R.ڃ@o37w҃%wn)˻G"}*),Y;R1Ԫ-Cy<,myr.Cα"E[|t\CZ76'6b@ >W YjZYYC=^H͔Xr5ojAq3a$A  ZzZn땋U'Z|鶇Soq(B \sfZMKi'|Z]TJ:RM  -AHD!?ʬ 966Jӈ9[i|Fڭsw8ᐝC^ \)X emXE0.r´E@>&H=_-ʒ"RN\#I٢m"M.EZDX3 VcTD\2/QaFdl(Z,Q>X2d:^-)&UISjO㾭PlC> S?{`q"L:ՒY >eKꀪdDBW̪ #~:ТĝL"ILSa(s%df3Lؗ`^^JXe&Zb1Dٮ3f?G`s8'恱_m$R* ٱ;>mN?.g pT8Ec\Z~/o& [qRau+O2S[4=UDM-. ]&!g7|a8I6®:,|Nc7D?/ =6XjE؎р!6㾀^ 6h:U-cUxxs}z+$yWȟ޿=e,[Ms;j_KO ,o_yu0f+4ͫxxPFO\Ǣ܄+\㹎5;<)|8hKL\ /kKaZ+v%J(kCvz9]6ΐﵰI "l o@Sh0E L('ksi/hs/#"G<fP}HK.FRmJF=FVQ,w qXcC96|1# W+WDJsAw(;";XɚuK{ y5nv^%7lxƑeGWkR\s%3'jB'gs,tUy-4B&@Z㰧E!] B5`;Y*THKEj$m*^֖֞ X@K%sUWy[ cSphʼnSi01*$Q+SgG˗umhY1o6 fCWê]l Ϋo]noXxjWcsռfK/ 7=˪O<.Ʋo\1S)ic);SS ŞU%MWv K#d*7B zJl D 4ƻY(1SN}Y{~iFW!-v~$z0;U?=BW+3YϻP+kT5Rc\pݯ<9g ÑXI1]XݣjB_4VdBFᩝmgqI{3|?߲{O=pƗ4Hz4W8AGGG^Bb#Njam {~<el8 'q;ȡ;L ?3(`Z0abv`OޟP BxT5i[tfug鷮jד"?aѪ( <$4n elt/lT1ke~1 QI&dH<'%!TLmQi3+ʴ{VQ$fUėӋȒ]ji`%ŖJY:ju'~Ƚ[$\Jı- ,} qqgiل 8ԛ4gʧ)P4g!E1MψhZl=hjLh"tkƳ|NIQ<чhAf9xzVs)I˔5_L+Pg)s=@"8 1)at(l7C0|.J]+ utWAr :F  ut|)ShB} ǐq蘠a%FObS7%wQ(utN7 mh) AimD8kRؕKs%/ޑ ̤RMNj(SyW;*Sr+[rWq)n >e9a2!l孬DZMdSQ={Z8ݺ c{TJRvҚB]8#Xj~sSλ m~;3f. 1nKއqJІhd~ -La(7"qne=AX2m>D"3mw=%C?_4ݽ}ޏ׌tPi.2c=}H{3HSNkORo:<\M2 T]ۗ'!>;UK["ƓKěЗ#bcG\x̵#b|2N%\lm+HW@WMF6ޑ,#(utE'tVM"z1& wd œo7Ew~. 
|ѭ|gJS͛GmGYJ\ImLXJZtCҮϞY\XAYxf@sI-;<&B[*xr,>= `0\Ͳʑ@xZ uHno;:a҅Тo;!A0pAkv& VO B9(uePl}zTrLp fH'v$Uɶ>Ej ,P\,1j*/mOZ֠yb,]!z=N*M#1Lbn.a읃Rz>wt3&Nީ)9u0 v}, B{cz a+) pJKdIgp0n0ԫs~ '9h&a19$)Hl`pbE؎р&}-imu⑟[sЫ\yqruIOҶL-_oΡdya1i5H[A~37{!^OG.\n {>w¾L H _H7^My.sxƺAP45 8 $qN^^J;Qa 7>|9=;}9|1?ZDvu`Y!0l \%:IϏ=^>u7I^7igP?'n&=Yg[ՖdF?lϛy??,"OgOq_>dj]MI"ү"pz<|4_ntܸqx2j'w~?_|x'A07 ؈j$rF $q+o$wF ҅jO[TOTr(FJ}j~ ]r8 LJ7fh.=,{(3WC}wmzϼv^?! /a{; ԼAc~~3Winz_^^]stt6<wo\%?Fy鍫ͤ&y wy—'MƗxSt#78;p'>KyVB4>(o*`']9LnžZ)3tCQ/nD ÷ps?<޾6 |gWy8Ik7<&0/wI,d2# ~~'/He6 ݹqҞNbB 寮^x`$/.jv{'oY"Mb"pB?(ta::)ʌ.exΤf34?֩;e˲gޙ]_%K^+ .!gS3s,w[$˾l_KH0ur\Ooݟ$ [)R0鰉eKױ<89e7 v8A|12QpLV 8"XPje9p +jIn)Rfjɼ?)ٻ6d^f{eaù>Dw%+J[C^b{b,_UU]K S> E?k$٘>;i/-+d''7eP46`;𳼴bS*,K“%n/5Zx :Ƀ˝tX|6Z?[gKuK_*Ux::%,UZ1ET& a!D<Cp\QY J~s7rM o #`ٵjXa& dȴ`Ck zg[M=m,"NFMkh3F0M)అXZC>}Jߛ&5?u9 f|1imJ2/\UeR̯J1*z̯u?1<(c%'̝OJd@ 4t>rS&ABN. $,eĖRb ~(? }Qlu0#m01 ,=R`TOQ= Lr4*@fF "q [gW:X=ڵ"JIX2@uʨ@TYZ ޳ a 7`R`%`}J%"»U Cp@P *pA.8b%=9bKG(";+ js]%]%]%]uv.rI̙iMYopBŚd'U5eMF: Y%Ӄ&^hA,-: w'sYsEHaI,U2=ySK܅n'ɃY^x J "4 hT28` ec4# l >ۃ)ȃ)|e3Z&S`-eb>X,L}?Bx)uerkwrTK={^wlL,qʐ][|ɇ<2w {rvQ<%?s; m] fˣ9yY$sDj{38h<㖅;OԽ~ Bd{FX' qt˨>aR ̘)PU ) %vU4k>j0)R M/f~)[>N/mѹdۺG|.>.kw.J0pRmiy+K0NW?^ "Bz\,⽿۽ߋݜhawĀ*A_v/ҩP UWfr#h@C[y-88Z~f^u|H)„ZyFSڟ MiP1Ŏm]Jo7$f 4ee8ގN8kWI$ ȕ)I0B p"6ip xmK@ v;Hڟ0:aZQ'zOPDlS4߿۟6M둉V\5 f ݀,dHή݀ROc6E"I`IN(ePb%ZP oyUt ՈP\&#PA``izU@!M:_!z فҁQǛe,nbM1I"`]+DZsVAĠUAs|R49:@` 5䆳fH~sLbRnn?^NN&5-8muџVSIVɋ+[^|^SyyyMEa0Lð.I+e+)lPV%H>7Am$H0je!!}@Et7goȒ]^54W-'5``V)yG'%t9sՌ_:&E&0[b<U֝>OS0)s!7ywN7Ґ!ֹ*ImB7Newoi? nXMYč"(eO(G&pYthL)E<S/5G=I$ɸ+<$a سh1TgIB3eKҖwosԔ:c5 IpGς/,C"1ngs˄KCYB=_~}'q?+Dg`Xh6gu?94l_K)1;+̒,M=W¦ָЃaF//,6HwѯzـU7/_(0[ۻ.0i롳ވ@xٶ1pWr:eZxc5W*BxqsmO]r: &! 
4Ҵ֏MJqFS^"hOlNОEPrȦ)kSXի_g{膭LFk2IV_Iù:iWˢZKC+(?!͖19BwtMlk !ҁL[ckbsd0%^0uA"veMl$1"\RQ,kW V5F1~5rEtChb[˟=i8!۾@Z(' 7qQE#ڢO,eg/\7)?NQHٳ?ojtSxN[.>kBўy.|7\n&m`ZOI9{?O"N%"GNvlIto7u~v ف-p>U''fo~/ASq} +.ةcIξ׭Hs gXԑ\/;@,›S9"m5}u=LjIQN%gD?3)%3?r|+W§77>\F"\NኛP>}ҢXFZnzc'{ .#9Fn2036X@U/m&)ο8x\qSM^~y#9____|]/\ʰ&>8`Tr?6*$Y;-s쁨ѸwonM7Q {9 _mc缣=z 䄡A?kyy;g=^ |Gz-pL9;!XsxwVoM4YFiNT通#@=95]TreK~o'('@]fڬ.gz.3j{$rI ^kFJQhFNJ9AC^CF{w(PCk'>%1=5KCxuۧ}Ҁ!Ҏ $} g#Q"$y0{3βXDpzzAώ^vV3f>S*qb-)% I"JNLMttD:)Aw ̓9['t"~3o6 #ޙK/(:8=O;#Sn`:MI` $ܹп^ҡKԛJEB0%SB0% KiL9t^r3$JzPTqd9QҚGzbe4/gH2'd܆{>%hΌnxwVӲGxX7&ot(:6R Q kATcEa1|kU3ߎU%0UzUSHɄU 19` 8Z)|[tW:\R5V{U9=Kڑ}&J[]TgAJ4D-b5Ka;EyӤh@%\pA ÂYCaK!_.V?ɢo@'͗zĿ7E;me(1|# xo^p$JN.qw`Zo;-^gx 1짿;yX\-H{[--0eݳ$ 1߯-ù6h9 +. GkHCjn"7|Th< -cƄtNWl/|c%kLnmȒoSDGUQ܎ YUs4V2\vh] {aj6&S9H6{"s$K1f3NQ:o&1\3CtwwܕD!Hrf7w򁭢>fpV`hvҾ"Xfb$G4!%sRRLB hVVh[8[ӮmҖ1gN7++OhIm hҐmoRˑaigVVhg4cʨ6%,e,UӬ`2&wl52y$V@T.?}}ܓ-(7,:*BMM< Srs*!0#Y\-;~w8]$yTI%zb"ŒN8FL$3B-l av:KyG3!6Ѱ|z۰H"\g9XmOv dT2RE.bE~!G`EkH=-)*ǶvܔݗB2&D)hf?WHj.6˃)#ݾw !t2 z~bۘKLƦtl&uiHJ#]aeǎ\Ҕ86̫*ɽVwa\Yvʃ:8mj/q?aق}L :38oqܨ0y_sFϖasbfLcm.,'nRB8-ߖ٢^u 1Ս.M4Ž'|CO~9h\:'~T|r#4TXƟ%>;nݜ UlMꁟɚ -G|'Ӥ7}o$[=>)i ]q?ޕL+|U._ݪT_Y^\.|){_ x"^KxwdZ}1m~e+*2˯vOUX\.Id7Ų1A$*܇oeҏUW!R_ni6P ^|i$ēJyo+|`$Qݎz᳉NgV<'2YFiq@R/Oя"Z`o!oNo6}P;&fGOWvz 'ёb򉸹ø2ӘgyuAOӘ6s_3@dr ޛ7 N3@]+y]ߍj"_7ƅ՘oM4د&͠55ܲ7bcMͪQ_ie{oOG7+dظ~.̖# 4um-^5 EcyΪFq@~ ځV{:Hk) lÄ3PI")ObrrG.L1˜ ʺ@X̭.&v'QA,Sֱ3յ糮vDt$ڻXk19!Gt 1+JSkJd+qSBa+C/C- XZx:Gs"zڸaA( fm4jQܜ1b ~X-ٱ t;J]BsR (_L"N`{4BMKs!zTR*s'Iݶc3o;h[,JE7fo~Cˎ -;.в&-k?K)%detb»p\68'SDRPT-{4ix '퉇HJUSgDaS̭O : T-Zرbf.tLi:5g.Zjø?x={I֑'SיÃNɍKP[a Z'+BeG4ǜ.t$ӳvFEIuݛ7գ僐.tD=v;$ź!؜;FH3@^<MlRdMݿ]=YdS/w5Tǖ~JK'F͐T@?/xA]IH}.E%,MS-JnP'Q))bErBQi6Վϸ9zh\]>k"2|^ff˝ cuѬ3lƙ)05FMMKyNnG}av: <JϊPLCnd .<>#70k:C:lfr"ԧ7s^AǦaoZBvg`iq_ 쇬JIֽF7۲M sh|)`ktgvd3^t8νb=J"cMuDG=Z_T_9-fb8,:3Uއ-dEr]!X?,6mr 0eE6;XV,"sR?__.e]>yVֻ7/[sz{@;IHk\ VdvY/?oΗo_8iWc_%1x.e%/qg71^7yc. 
lVk-nTx^7%r \B:n^Bj ,I)`uPYQ*2l65R z A~j,Fɴ<*OoJ(xwKMS3&,|}KQ==o/ C}IҢ$_~ArR[r%9JW/Q 1bYZÙ8z{\PTE:yh $,0+eci*7n~{x?7L]r# B,C Ë\B6HN$9 Dz(Po+ ( eڿ5Fb3$V^أ\$TZ1-#G6B-mk,sn薣x*o]Hɾ^)!8*MkH]䧛_[7q(G3ObGd!vO$8> f_:M,~ vfu޵kA VOh:ǻ+GONҮ̎y0XcO3kt=>ـ06E\zl4\)= `*AH{'|C #d}w*?Gwќ'±Z%Ā yث6qǴi7c15sU9{ZE֞Ѻ;]ATZ.Aț*d]beirMBCT9QAA*BT9vN{iX~PF y.C:ďay&@ I<\9%1$p^_xGUEd~ڌj^F- bSK#MFR%9],g^sF"/,# 5 +<1pFMK{xbN5h*Mp*sm-g%0- FF5rWf Ӛ1C~ P/esK $!b]N\P(m A~EI.hS8@ (}ul5 9;W% wWQ2+SsM=ʼ\ܘR||KQBHL[ni-׈ѯ[ew,f0x2oonĸ4+:\?YQw3\} :"!hJ-hK!4R1h\3& XG۱pZ[Y,wQo2&9 m!CpA"(%PYM_Tz)D0AV_*jIz@ccLW3=P-pR&l їX7}h`Y+Ӈl7oy[J`R7j| 쇷_aFj}(C8!owqcGw[gvn!XW_ŕ'_>{hEOxj@b&T.j[ d̉Dmk-KTEm%-PKzE$4 Q[qVXN' XclWn)̬LjLƮZW774dQ~U47!|cEG$FN|4Cv_ߓV (J8JAɃkM/G;zP)Inc7(^ EEǎO05LѠ"aHp{\5DRh}­nIJEc.+ׂ@4t9v)@5DRzBݘ Q6H&c ˬ& mB֦r5!j6'ۆj~tkvE$c(QVVJV?f2D2^ BMT:F9cGFu3-G |AR!U(p"q󒤁/IOqSiCܦ@A`&pi" Mwf3|O;34kaˎF-x5Aߡ+f9:΅drr6Lx3ǘ g A-mc.7a"SY+[!$%fsZdT{irA§FTG"tBp&Udm%hC!*fN|D*PB98($,sjlK sʜ-hy6=Ϊ;OAu'F,K)h"z/ p!Z*H"Q>Ÿ> |}n`0J"MEV8lͨ3EU"3 dp\sOHƍȫ Sݔm0OV%[$lI:>?{mWBF}8Ҽt dno[_ڭS" mk!c)7B[5њr~4sLp:FKvLlL6\}r"0<_ϟd(Ƶ_gJ,D-DymzjIr_X `Ⱦ$zp8\ ˷Otq2o?,`|aUm6Lf- eUO_zEF@mOꬅҁ^i t]:w b\ zrNFO֙>1sٲ&=KPJ C+O&UI9 ۼ߫ a=)TʠW*PR-riqY}"Se7}\<),!͟d<~cW1~cWld̳"Tμ#VL<ך+02`NSq=ːٹl/4+}NtX["( l1)LCeүu7TO#C*|6<9hY,D e-Y^ `f6   O,66~{| E+׳J%%yS=bԸDb;ݬ <⹔yxHsxd*&VGHǷ0 X_@.~c&JOX$rӛ~c +epx\ٕM3R2;dJQp|n w,8I7Y(քG5'9΁M+x#S6\R $<'64lwۜųv ;RPGg9D(+j)j@KN-/HI΍+N#*լwUnW47M K`tg^#%2>nKj ߲owM #[ɾRѤzl4̨ }GԱC=KPi7 t;7viGmK g ႑1AQB!75S9`)ZtB;%7tik?1R5sJ ]r yVheU!Q78"MMhm0ը-GLKA S7CՅT;JzҦTY0@fE[sMrfqB*υ%:B-7oPƭ(=ϐ[i"Ze8c Hb vbƔ1^WHWƒnU)ʟ!O;k5$U}Ɋ+:p0u#׌IqX|q8ttA[g5uˢu6ԓѾg/(2i*[)sK?K Zf˃R\9lH{N$ž?E˷?g1#UET/~X}xZTӁvxpbUL<&T/2G, I+=3Α<[.= TȠJKΓ/6%ڦ?1CM +GwqP~4G?rY$(BkDLe (@}_Tg\>Ckڭ:?gG]t~g%ߏBgx?\v דūWA?zWLݽ}{ә Md(xW lXdnldQ <\\޴,#" U)=Edz{ L ,$N+gkG!o3tAlMJӄYP|_~OjFEtwW>=+Ꝗ1ψ%86ɍIFxfg}(BI؈Za=/U--[DDE[ ZM Df"9zwx={j-߀M^j=rv6K8A0IH K.}`[y]gD+bܵE`x;-F ݥM2-@5dkH k =Ils :XY+]X3$)F)a-%r iJkRkÕȊhp{1G5exp!5&@ncoKXïrFPaNJ'؇\hف>F@_rݓkwqT,JFAѐMfJsL+;k*KAaCQS0 
BK„`Ɓ:(<A*Lj`8(%#Z$cbEcbi . T 0b| &)B`Q$<8BD3n(us]ʾ1ԬVL1̉{ 髋[ĠE9q7~Q =Ƨ=Iӟ13[nfP>/aV~Rx66iZ/ GvTZ|RM1 7Jl J@V,TlE}=:::+\!p); L`>開A QGr;&ݲ'*?86)A4b'JxVgzhsu^w?j"f SxS1F],:Oj$LIg4>Q;5#F^m^;l4C!1 QiQIFIn8$rV'Ǣu?.pS4-DT?ԯ׍ضNvKPDH%bj1YW`0>|(زOu3>_]h/n&ŲQL| R.&4k VTJBcggYX2ڂS [-*we(Vgry\q1um=i=9;Qt{GW;wv9ے1IyT &XjBq0Ž1G4(UiYdDZLKVhZd@L,VӢ~  9NPydKzۈek4҂fpR}$TUܝ4dƢu?|h#A[>^]g|4A"Rm-Ӊ DvuWT~TOBc"Q }Wtk"q#E]?́X}%5U-N )WNk 4E7׳tE!@+SQyʍ>VyZLșb Cڇx$Ir! H-~&B$i<ǐJl?-5vx!J|>Ĵ,KSR}uH't4k:UjA"*Gsu#%6WeXruR!s5FL66bRd=FhmlZ0IHcO5wG$O-ZNh|Ee'͵(uv0NAQ*4ꏢW]]?SPܣmLH ([nF%?*Z ߙ9ikNnN}cz4)$]ݝB_B:;>z8;HV :d%]ۙa5O2&GQerg2Vi,%YmW3s8&kAUVrY,MQ%{,=Ìt (20h/BK}D+$^Pн}3m#P1cr&|6:KɅ3U)i T -LaRRA#!h `Jibq!9ųg39EoԘK_wm[ohKq@<#$pчB vBd03V \|z& /kOB$Nvx a^sBJŽg"QS5ω;9'gZIhcȐscAF,[;t%--CWX|$.ZNU)35X:sUGHmh-&82y?ACj$qC% Uc7J67㼖ӟ}q'I=uW\R3MgL>$)Lyu1/2♏)m-U>$yg]4.6Yr1[30xrXU":*'$Mrլ9;f"D5#ԉKjHo*f:v+1}*u#+N7f%*5v&8ЉC"+InIz a$:TvID]Eiwo8czI'K.NeYkr lBJ? r?TlŞVB0<[JY 8Sw{ò+86=KP"݆%cE:(x2c(v$p@"9"\DU?{u)AmP1ջMp˃L*` xcV_0뽍6!_xx4|9"z,x ?7UWx\PaL Д+XQIT`+ywYT/ Lȴ}nm˚ :` g-$bՉH "[Hr{o->! PRFDDCN#"g39S`P%~^m/¦6u%#|ٱ$g9&@ @? -҂om_V&Ub:7T`*U nßal立Ӄ:2$z g> ~:h{*q*KJhBND [ ~lUM<+~U OM(w iX`BUTIM#Ε2 Bk,ֻw@{`hk 5WK"*7`Vj #wJ@k4žf"(#s= \("`j'5a)//GT 4H ܞ}O3LR|-[B?mnW]k`Lt ޒoV .,Ǎ"|Vu| }1Fd-& 3/Fd:/)frw/1ebR}zt/ZKE5TIy117ˢ؇_;h[aMp~1_B*Nk"$"VzlLPPҶ.Kl!JFͷ tqW>#Ȱ`x?=}rc9˿#^#$39͉VdGD(Yy y%97X)N0ˤDpc) (!_1>&RDƕ'9XyR޸ze ncLq5_j%*?0-oh{(S`qęmOМ!NXy;Ly*J Zp!aZ h^wianQ93*YpKyU &.cbNyXі,a/1/Aї#"ˆ 0=tiϗ GڸoiXnuʔ;0C.aw'jź1wfiln>h>ʽ8q%4_[|6ևS,TrQU L" llS-jftsE44 `[i/aLg63S-wvxgx!`ĩ y`c[X"lE@?K/9A@*8! ,Gy_}~d)&eH0-eX!c41>=~5/?JLFyc& NLh\:=GdUWQ^E{UU&xE1|F3a,k2A?)A25PuSO܎8:M8T|1W{+$4cXΊ! 
Sq^H oy](j["\M5^)n׿ОW)<剱/+)t{?dvuJB)V" VNۢFY1dBqr;*/4j V;3J݄6A#-7Ce4 $R'H4@%NT3t_)jvnUyɤa)B+BkS1n{TldԮvGkIä{IB-蚄M7۹s{9jg.8RWޓ\wѺTIQArc.eaJq5EPqY9-J6%%A=XO]N'Ev i3Id`1kaHqlD:K$:LǀÇŵߕٙq7W|S&(GSN8Rt l;:?O8[Li/>NA8vnDnMD68}tfX b3H8>[f V>We.krj`?bJLKiKB l+"BXp;m3)jA 9k[slYL/ZaMFq[[c^ l9_g4"u.i;tDIN52(uZї,$dqI~'TJ!啠}BЂHPĹ cJPUaP|M\55;ɍt_]X`> aPz2*} j핽)q$[=yN_=I\)/ o~:TbHJ5zg8ɂ Hd]OkDE1oŒ:FCFcj_%8ttN/`=\%vã!t0E0cWtJ?3S%}?6*U 9`D}{6 Q(!! dzqQma"9sFr2q4XB%'H^HK}b&="wҙ;DXf2TH5Ni&jlbPrōs P%e**5ԔJ ̐Ɯ)HV\WbX PTZ_+JXbSk[A V/bC `M $/؋\F.frc9 L̘ Φ94d~\X nwm\;q9û/jygۡ|+(Ml;.H:aTQn,-5(,!Nb\aY ,Y򊨊r>dE0FKjK( ֕RM1$ƾ| Z G-C[4/QqN+' nZR%JB;YZ)H86Hq$$q )q0,,eڊ|SAVTHg9[*^ 0Z{td Cw~YY s 1fqRuubN9pFdub[s\a%Ws!H`dBM奈 1ܘRҴMY}KPWb m维A\sV՝w#bYOkiA'/JлR>V+Ԩpb,нöDz(ܮO?ZVW7N:Ʉ<%\aqwfהsRU01}pH/٫n^>= M"t5[ܑ? #3;8Ô4 ؆&hMEOhD$ݬL`qm0k>k*kɄzj㈙yчV{o= ;!wgMɽ`Y]&9 ~;n6sR0r{f)r&ڃ(pT"I*JlAƅ$WR[j&KOp]Fgx<][)i|!B;%Dd7By5Aqrl+9!,WI2un(ZOb8̈́$s I;}.i> 3Z`:L "zo@9y~$ i-K,#1?gզ6g|x9#f/7~mMbX]bH`QXtSBܺɉ}WndWw~yӧ?͊q<7n ]fo3ǝbЃF@H@M\Z&s /N2 M$@}~{aP!acFX’ZIɔ,0,` _N2ޜ=-.P bl<;<;!WhD;軎 v3㨜Cp NsW&ɹ[S DprOlB>v왹/eBR (Ӯ}`)c^8HY@-!1͏SwaMGwlOS,h9WɱcqwuNP\E>uUN8O-40P+CdiWEQ'5c)dFc)Nu`18bZ..oW_ Co *Ro2;əWvÛ<qY^doIG=y47_Bf SNbb08d ğ4` ){hRe=2XԳ'E?"Y{~ANC/x^i^ً3fArܣ~xfo2f"WVBi ʒ TEBJ 7P)+]rxX!#5J|9\oAX6"c& 鼒ʧ<0S_v0,|PfvHOTeNV"ьߺs#[ȅ+ F*.SKZI0XKB+iQ%P 1,֍.|` *HKޑ2_?;2AkEj dlU趵ߟ8߹c6`S@`Z\BVnaS{|~u N܈0 Q K4"#.Ĕ@9P6ϏރΗMiy,vn@"_v3߬7c֗6Wٕ|Zsvq8nӢZ>!'N3T % hP2ELsMCJ)rK[8Њ!YYn#Ћ&l҉KĶь+}meǬHwYw0@P 1pR\h,vVsb餋>('8ȡH0t}/Y5a!7*'MEXsXUVXdhI4#fJSA(VX| ,;bXWp5M[ц4 ZkjDpnb17WW;&~]%hz]%-5]O5661Sɮ# amo*^§]DɉDr1?Qړ4 pH:!))XaSɲ1xGr᎐ܭ#lkW ,mCQZ/'6tb!86?=:dW".">X5a٘C)5K4,J1`'Ks>R?st9IDHR]7Y9p a9,w%{ ޑ. %[Yvlll0V{zM{4fN >KyFDk=ڇG?H큨|Qe5ݓ GSOG 1d۟@'M4?!g398RlwXpifsi{ Њ)&-9E}O} @Z;%{S>?Ǽ_t..7_}pfϭS|xw攘ŕ>}$h϶cyhDkW:\j5-8S Tfb QPP|j76ryekOVpRT&F5! 
_;;_?PKqVյ}6H /jso΀],Uign~'Г )0nyq\)SpZrLʽo1~Uq Rql7"J @Z5*=ji>7錛g0Z0~^#vU-ղٕVȔ0wrYB.5B`n9V%*K.{eLcN8Re쭡[elVR"<(k!%Z+%Jא#*DZpa%v`ȝIYf=ňKơLADMyՍ[U #_w{g+ԝRAeywV@>׷_kNwYv}5mXٵƌSoyCp~rjLVW7f}$jp0I8lԣZG:,H&R=#FGHL4VK DڦyR(;灼GsN$[_._5dsU 'KLԫٙF~Uݥ ' Q|Jr(EyFyBJ-~$ Up..2 ù2-eEZ\e^Z09Nyy4vCDJSƵ 1z:6K}mzV>D,P,'bA[ΐtW}/т8Y6v'9R s0s,m( GKfv)ęP$q!ľtFVS5Q޽Gګ#4ʼnHr& 6bLwpu/$1NƷ/jӵvڋ)JUiOoG 8橒2ZRȻ2,;ĜC3{[L6 J3̛PzbQyN1NFnr Ɏjd8bUi`^Q2Ruk-]D!&JhݥS&K5^LJɬdp>_͵Ae7)Ř@H&'%+Z$! '媑Ѣv&sJԢvKHvm;~\op᫻bV܇Wpt1&!3~k~ct- 2}%h6y aJ5qq4Aiԑc -h״vjFѼ|/#n G4j\ ~'P}#m ÉO>QXh>F(Q^tS-bFʜDІɋʩpLlKb$@D`"tc ,PD򷆚e<>鐋:$ĐHZ>  Iw űH|K1:h9!7 FQWM! ESKe=5B}|\!ǗH2=Do$hV6q762yfǤl+Qs2J!ԭ(iG%Zdcّ\A6!Ny.<\W7SMHDAYH澂J 1IX/!6fu?^cέ!&1x;%5Zfwj W7¸fў=xZ㳻^0,X-?O0`&A\zk:oL/J| Ui@"4Yk ][s8+*=2-/C6RS$@Y҈3 e-K2II>Cb5Fƒ5=;gG1!6PTgK =8rȮ+<Eg4=F*Qo&VG+1EpbIAÅwwZsƴhU" #|AKVaf\H/2!4#;3۾wQ\ʭ^.(8a}{Ff|>14%RāuThkA|;5j S̻uejS zn%\@ޡQ+v^k,{Z FaJg]`CNëU4"s˵ǿh11W lSg#{ A;y^uЬn2gHޙ&]>X ?MXʹP)4M}ӅnMU>x$HNl//G_oQLQ7HL/w05Yv_~~rQ`4_|<ܸ wz'NON^㇏_;&;M\GНn?Ϙ٫ӟ~|,_?~u׳'ýg}{c]xICݿgҋPߛ;Pty]s]XߙyIq0|yi`h=|^`ݾd+Ktqu MKri@KӸ桫ɦ[)^|Wp{ځU6@62 9tq/=75`~s8Mvdy?P׿Nfpwtv$.JzG7HB1B ᭻gE0=l zir}.sAtϽ?n\2CtBoK 헃#u (elIڟy: dE&'@/Fō0wrRcle7v ^$g'o߲8PcP! ,Gt}|w伙'a9dT_.w?ˮ7O^ y0Ӈ/n ".Su?rA+g}tq}^dXV, 1|?,qZ.B+Vmy7nJ͉Ei{-F8,[-0@Ly1N 㤂IEi-~ L7rS^ @A+=jjMss;}}Q-!ssuU\7w+ub/4xl ۏ<ȒaӲ|wrکj|zsO_\-C oQBxnd $fĸ/LDpR:H RZ*&w%JʚA[sy?ύ\[KkyK& ic$X*;$Jp!XJ*:nW` o;COp o ~xc"s]뫅T'_GD)YH\?HԿXJU1Ȣ QKu[q*=an<-fwVd,޽QZ".o8Mqsq=ccTb&%1uJ$cb4siH0H;ڄBuH|bZ*ZzVB̶g@N> P0hsМѷ?VTX;L R+A: YWsMta? x}q"i0`V?&5ZwZ;1Q8IqB( Wa@V`۲G0(mfU006V&xYSⱝ2, vʬhs55~ VZL m}iӄn]O?>=ྂ:[>LGJEx)YfuhY hA5*5TpRE KE'0X3yDOzVAjOǞRAB)4=.NHc*9kpĠk˼ r",֏G>`TOrrBMz{.;2^M8y:,> BrQ:0wx%FY$pES)ӊJ)QJXD~ )BFt"S˧};0TEևbAO@m q gS[Oi*o˙V wJo|g!Yg|Vg\V`hӘafw . 16cgLIk|0".DU_TI|N?؊ hV۲j kH5X˓Ăz Qx#;d|-ɹ$>Z_Y y[OX RJ.Uwb?~{󖤤!')@M"ϋ(QSJrKoY00cGΝЖAԳ w1[? 
C_ݧ,KCPmH\-"~:Y%WL6(XatvJpuJYjȈ`P2)}0*sB9{%ywc m"u6uʷ]tL)tj5!9!72E],nicj5 F@C$$QFҳ RIr5s $SUdHnVW8R>NLMsQM8ަU%`n42 5UΦMh~Jxs!:(jƍp-lïG2&>1[|gXq:&ƀƴ*2ԢH3i$F&b#<Ŭs4bWq-+PTJȲu23FDP[g,s]m ϟ -I6f╸YMGtF;Wm`ϕFwFɳ#mww6h!s6+PJHi;nu(w5l\ٓtS:1ݲIliIv#HX=Hۥ|5 W#F;qSC2w? IS̉RMn{g|C$Τިmwod->g)y[~rMCog Y|D~74|8֞Ƹt \pJ;97~>WGOjIķb'},rzXKQ˂(~gc"-҆xK3Ҹ3=_qHSc&Q'qkgFI- oUx?~%qko^.gl?\VZ׌~!$d8 ߯n\B5oAX:g|ﻬ"-Wȅ@5Iod/\n\u'FMݺyRgGLh@fӳ\$8n;9~Cۧ4;ϓl4:Ec#"F "kHiH.QRAjLDǚ;Ü;>0HgbqLǯֽ@U "sZsB~=.~] v7o!MwȌ ܮ ߕX.!kmPc.[i D8a{͕(6󘩘kTRZڠ61|ĘiOU#Dl 2-NH#jpO+bqV$&Jб0=f *e}J e]Y e-2|F/ ZkkoВ6HG3k=f"c*qle<n!t}-Jt$5QHR$5QHRԬX^Hp̌Zx'TH4 )2 5@5 5qS=WéR]J404L`긎e̹C2$K(`ЇN;F{FQ鐍) 'FNEizduiuH sJg,`x*3)#,p@ h|8$Yc%A h-Ib*ہqObZJ d Pe,I8+fs+eøA\>n2f¦̃@1 ]P!˭IZR;% ^d2:& RsqRc%"dlm&+Q]Qֵl&>q`?;[ tZ30ǙK}g<hcW l۝ٿMb=H}T2LJ/?DI=ĩKGWdu^H=|LL 23\(~GAƙ\يecp2cpa1zt7^fJ…^ k{6ܺ5nc1t Qk|e h&Aq&(In0F/<) R5\tSL1e53pe r9>W,1zu+4d"xǸ&3s:rut5J " -7]a&aCr\h]6L[US1+x3>ttDxݤ~û)o^YVz@ 3?0KbQYs7OTx M]-=*Ճ<mɘ*h*e<”KjAC,vS[FKz4()HP0 G.NDm'cAyvצqIj(-۷]sí\s6m tYN]v:/ rqKTƖ+@@}<^#EqIYDFKW"Ib,ی8l6vl5_Rᶷ8lBZܱ|ftR 5 12lm7 &>q!$+^5eֻf4{3r]d[ëGq"Gq/ӫWIn6Vvnާ ﱬĔfTuYNYLWsf!˛A 9>٬ŜpNe1 rBލ !YlDw2/~ @:Ehr$ (ι9`;v/N鯖2NU;*?;)#cV~pu/]s}2gV6m"G>= 'o&qpU449cCWIJuYAKˤ)DFf4h"0N0 _ }F)?> :GL秿w]& ǃdO5e xwx?9=~1Lws㍯_>]fu_A.?rׯ^|N_|sQ?8.Xxt|}o/_vO~{T/C#?2qӟ!}wϞXp\>0a~qkN^,PZ]PShS__Np\^W_x0է_.j^L@f_bp?:'/=D/x:=oK> b~2?܁]Td|s]\r`Jqkǟj+Pk(XYp<xqAߗ~̮_m Na@ *4=(*kwYkxO*y*z0Eٝ+ck:[Yj-oJBSi $;?tj8gXWrǕ3# 2EDl9p5sвC͜oxpsa oRsOKJOkMAͅ9-}V|1G%qDsbΒ ]-LKv6$)VNp3V7)$A;U!]p:/5L D˥,18g bq=AA\Ӵ!U,J jEeSBLpOvnWv HnO {CI"1qAי"b\b#|``xr$@~;3Xf\fC53"iЄ H+;2T@%k 2G&L\Ԛ:(^G)d}LrAA\X }SEe1յx60*-Rjϙe;-4hIKA$g,pUܝ7`eѫ?,+;'qo6;J+l@I״ѫ=UyIOxP:O4E{cbu(-yMwDOPI]J (#H$K`J_ 1ғP7&.8cH,UG{p΅|3i(]]1ou aSlR9ҫM TC,Z܈,Z 8+h^U@C%R{^*,n9b iE IJJVIdJ#vJ53lqC &z\ `G҆x!XE a@ 5֊ n5Ky#~:4_0^# q &u"sSm ԧ|iȬJ1`wܯH7@=P*Y_P;_Na m2.B/Qh|d↖Ch,F  jAy-:F%[ ze))kFK JsfA`LEuWQNUD$`HMAG^Qs6NHnC| !֣i%5}\iUN)tp3#DǣmdZJ,421q5NF᠗l -YY&qԴ6`#*i gY4Fo/(pE% TRU̲L%s#86U<# cHcf*`O>={_ώ.~_ex<ͧ珣ޥb}{]T7b3}a8Ag 
.$ޙ)1[@}bIώf7`gWoqT}^Y-_ ð @_yQ'`$M+-Ww{쪸۱Q Z.͛dWSKToǮG%h]X +-oz{¢&"ڮ W-Ɖ8D꿶iXhJJ)NⰝ5y\=Tr'nǛG"e*S ()V+,uYH`lqH;k .ސB bjD"O3;6C4@tڐ+P0"\OcF?#h28:#(زRZG)*nOU3%Ce!'$& (XqxGRd%htCXkOPZhЛ۟6v!kڂ72V.B(mKpc[A6ˎчY*V> v7:& {@)LkGK*2AN>4v`vM-,|J5N-oGmR~/kѷh2oMr3x%0[ӴRoRᖠq{۽"t+;V̲#TaBۆa/.Z-/o4;zPt2%: fգ&[My>V kHf0յHYUof&OnD%"o:Y6W9tWYJrLfbL%ָW"KJKGDفRJLYwGoԝA(藝kbc1s<0Es-ܐP^jWnjRN4pGs(o+n21@7὇2!K|!Q@Ѳ?;W,;l;py~c"^_0#WI:3IƌH gd 5Hm]v{Wy%0MKo0SR浂Lp)Puo&zk+o m:`ٍUPKDQދ7>qn,rt2h?AI V@v`o٠PIB.XC!\~DŽsqaR:+Qv^VZ @i#)mt>B`||C ŮBi}5$.'FPCɕD~AtLqjguU -0deevJܘt |`Y |Zvc_#IًjbyhCIt˔^m]Hߙ/ک`y۽u{JPɽA0hVw'L>԰AS#y~>RlY%Q8_NAIlzяkwtj>s ZoiӍ: ]άɼYd..rҀQkv,R4#2c"|K-*YF'At [Yz5S(`!#@WE`'`k`Z!ҭQ6-ԷaٽǎDmEPkG6}vyI2Hѝ 4k.0ђ_jawZg:;N7j}n"fJiy]+43 >+=!Y%%[qc $bG%c!3Ln)\H<-wK,് 8e(N:ˬ64v1?;ڔ)"MO]6YbDo\4`coټbC?Pr*:91@A*W=Uw8Vڮx?%2Z}KΚD;dI8\8|,yY.j%X`[0T:s72h>9TaFV49AR>Jz{$쫻lT/>GM%1IK~Gu9btR<^hFgEL:z6-h*[ /|};ſ^+xŕ#-7޿apF,d78{UF 1}EyJoۛI'ӻɿ|^_"|E\g , 'n2z89_8e^iޓk4 bprs?]_ӳyRPt|mi"1=W:7T'k65W1θ8GѠ8  DIlHqj׎ L1[`;R-CޢtQ1n69r 2mi< L콋K"洽xb&G̑h9ͷOq|@vY~Id7PtE%g9 PenM\PBUw3޾Z#Rtwi&" #!na$`6%lx]^b#'hAY]n?FϓSJ[Rxv!oP1iZ|IP>BYto{! 6a!Q[^Cjx?M7#{)L 8 ,NOgb.rn?osE23HO; 9M!~:"tt5W޺W$83T&G'}4H08ø 6ppFR,xe|c`æ90hug<-}cbHaybDidAGAVvqW<!xQZI#esR,O.a4ڒa0p<&WY2a )V.2&w9prϙ@Lk\$DL$3'\=.UnG.WI:3ILY2+y R[W9KQ&U$" @_ TO)p^VHcoky^9#iPVaȩAՎf2gPez yY2|nuA8dȭ҇Bpzv*799W 9 S[QJL]kR-ڕO冗];]񣐋Sv2]X rLJ_3'ӘHP_"7ȍ?+rʹwOÛ,9sZ DPz ^J:#$hs[5DE8Zdvrt+!(f#p<݇)ҽ%mJms/`aoSV7sPJv7xoo_Wr~P|D+"F$J$=K9xm乗?$tдvc8}skV.Y;c|^*A(@oe b- ޫ{B`]?yҖ^y+ H}. " rFFEy%fk&h2:4zRM<cIM^J~Ϩ}&hkK(3dH#IC4RJ# \-jb*= kA$ N; R=2eZhU*Ҭs,VK*"UdI;S(d&H0 & m5))#3oX$fGO´9y}lQčflNvVnLlٿgo#gk X̋ͨPfޕq$BûȾC>he1؆$61GɄ"r#}I1b5]]U]SER8(EaDL( ^Y:mD8б #_ B IT ~Cehpa@6X2F"hk".+Z*IAJ@8İR ҅"! 
;3FTD{9m0S1k]hٸs )%}Cgƭ0*x\p{l-BWбhVu8`PUGF2N,o s(3q}s:pdž{,AʊF9Qya BZgfnk S ݞ2֢'MTy*)IӺbIj5'7eZ`لFG_QcD퇏bJ"ݔN,ǶlJFTٽ?N6@"itahs EV_QmhdþIzcb.xOK׹nüЪ4XVrBu*TAz{gXs0p&C( }9a_#AVGevǬA}֚^y7Mqٌnp9wI-$M(iIJk#Y+UN":Gma]!LrE~ES_fcm!Z3F!0/osߚYH]tVxݲLTdgyxrеH*ر!F s<Z0cE?b}-ǧуU+0ߴ[ϫ4Q?ڹzb Ob\fc`}8j[p0[[zCݸu8jp8w%)^Xu!YŒJ԰m""]܁#uz#}ӟuXT>ETnX'y4ྉ$8Ja|uӯk<MzČvbCKF]oܦOr<!|yNc">! O_%&Bwc=LY /sC>vXW |'  +42(DR1  qg~n֫\lYS׳ՆS7%g-ĝunbS[7-Mw[kX2 >ٯZ`D⧅Ȉ=%?[dbD;PD%!_O"2*ށJiPPGڇ)\r_rRD B1m7't,0qQG 8!9#3iC}0do/ut+|;T9qjcp+"c7K!(2(6m[H1P(B٩gLn ̒Ds{a`W b "#V2Oa܊J?0I 5DD˨ ;O~,=IV8AKoQo蹽ğ?wX pƝwdO1X_SЅH>PscN(Ž՛m}k0$myv G %{ܹj_<Ë^7N/fv&nH R"0*h$P3b2.geq𼼲nyeVPb/.w֎sXD!$AD-զm߽뀃$XsxY]O]/U 1h56f) uXVDspڶaÝ{>y;3nQcBr? Iz$ ?񧜭I,WGMha3,uc0G &{L1%j[1XRA`+GI Ǐm2b7˳gY0ES+Q1#aFS{S"%7^(UV-pm+o * DɆ{.A\Wَ.5BX~aǪNԬΞofDGNt82474*r=h@xanچڙADѦFD T0|D#Kv*wu1\qy!K@J<$G"z SS=$ȏbA#ܺĬR.ޟcw<;ԉ;`x!;7;9$hHӌШPոI%==yY(`o&{i + n;ѱ䡐lZW#6Xow{4/6`gЬYr\Z*]Gz[C ev7 rN[˙P:YG#P3V0Ƨ6)"IEpU>}ܠHɒ?y;ռt:>ׄ t#aҤcqZypbG aI`iG`Iq˯~4lv-1c>kQQM $5ZyBuΎBP%iB\h'fM Dդ)!cs(c5$NdF;'q Tc_2sMJp-s 9V'ZI%-*f:E_TFcM%QySa# xR'u CMtJKDmAnX``#$UH ihdʑFF.hi4>z)32OmP%pa}m(RVӸ'k>d#t+%iZ{ N1bGs6y}k.eouf;#^~n\!eø6L7s`6H+Vި^5?܍r'7ܣѐ2w) @Ai+pgKP7SwY@^Oƫm/[րԎ@yE*z![ozu`Z =F?  )"F*]9Fb#&˧F.Z9kdzRJ,8nUjFʐT^_ fRDhCDXD-bh{NBʗQBLmoECdqۭbSǴۓ5z Xrr?]cDͰmf#D=3#3}A:|Mnh|k;8T2?D (.cXIi(.p*[ʹ-w~ m1Ar&@sAJi @y#D]:yd?NMi\^@ӣ'੮[c0 +֙Z\Ӣ *,c}d ҀJGc0ai2 8"wRhmI9cɒyu$ FQ023FA^ЩLOF5Vԃ{NlEQ=_: ,UcDI._uć;"ϱd"ˣuQvqNRxCdh,tݾ-*J*ZDQNieOM얁Ԏ -LR %sX'ދ;P tnI~[3^0Z"ӺjXpKG!;5'y1 Z;I_eu.M^hH?^g0=_~ɅFzaAewLE&hnϰXpE/ʞkγ"fpEo"GήGܿ[޷KNjٵ?~ ugo1\/svoُn^],|uxk d>DkfùDaqG(Z(J>Xw q'!\UN P ?D?we;KD^Hy?/o?<}3Q?i^vƋ_vׯ^v?xo}x_0_f޼~ˇ?_n5~|ax81O c.ޚ?'ݡG_^v߽ഇoz~Ύ_}XbRHoDgcJ8BCvx~AIl؇!)2HCGU='GA6{U]]]Y?ތ.#wϒMM͹|1o3Oܛ0L9 ̶w!Lx.{Oq"a˄VghJ |_lM]_TA~fuMr]=m7! Fɛ~fSbn5Ͳv<.aO6ҝ`8 @~׋g`ԗ_y Fv:xU& R ^6}31G/⏀OL1g?%X dga*/+^`xi:e:@߄!9w0 _,50`|7v`n<>d.7 `&2Qd!fsMW*/r'Y=d֜dz']25"2+ZWMnr-"D\824+@[B?^e})q*b!Lr(nH{uÿx8;" g*ͳ^|;L௹9?:Kd͆,c/Y g={Zt,& -3WU%&.bv6! 
[Binary data: gzip-compressed tar archive member "var/home/core/zuul-output/logs/kubelet.log.gz" — compressed byte stream, not recoverable as text.]
GIĩD@W3L$& L(< wLP@Bvh6G孻!NjWjwvNco,WVhl{Vdlw㍳RFZ2RNݬSk&hCZ!vfDrey9v[vJpR,fTvN'v,h;tym׏=a:RT3"4A?oD"nn$/0^bMSorԫ"2fsHTy6*1:F w\d= n!ݫӇSW!Nǥpntr<Q &h$ftFefD;y}/ * p HzPxpa]r8΀[2qIT- 1`# ?GQգа= F1 sdL .G>{Ap 6#ohkDߖ_[C~';q 3$I,N/H$1E#/}bBf\7F⽠,c؝9c<,mPAHE=ȎrPcDh ٱԎ>nfcBHxf 9A.9R=nN Iv2ьlrA3%$\~Algظ /3BU),@.crewBU%)n>3q tHCR_JM#Y3yXRQ= hP ! % HA,`l4Ff7h$l RjDQ@lXLf>GN$RtѴA!X:ҴoVU?x$-)ct%MfD1jFvl/J%6߉ ^ïh6Q@NfPfFPc&IIRb0IJzERG !"Lr?NNN;U k\ fॕ*JeX]k Pݡ2~L( EkY%Nح s3 sWoSo9FY Wî|ح's33B@{?#6uGSQ pPv$Gh*X>K kי9bΘ7ܽcW2<8{ lOOߝh.}"Ls<9}0ETjs):n>bC(y|a:Bz%s)?l:,z+U.¥j>6_KayRì(>EŁ32*Io]-wUGJQ$r?1Dp8&)KNU뉦9kOհIފ cfo9=ͼ|/WNrmg jWp0 0!=>h-I|$ >!Ƀ'XI aȂ{[k},<zK\ȣf\u@Q %=s Q;҅2ɚ),e)uII]FO[˭v!%\ pL77_ܷNvr ňAɺzX^S۳/ EPDiS*`Kȗ =ho7 ʄ@m $q,N10M #Dc?cO1 =y81tR$V㜋B(Tc>h' C7:LdG:c*,Ip:M0dqU"Yŧ.Ygv&܇ ,w52߬oVцe|>dm˅Gft.o1I(X 8 kae 4M3RAj/ U\݆ϙ40@Q=iZy8cӉ61HˤQ{5A@uOl澤Ǎٱ> 3qĵ,8>z_OF*ꝵ7Dn lcroO_k>̚_?~܇Yㇻ/Z]yH&CAm) =TGޞoŸ7M#72qݨ2pY.oWBZ{i-ix}[}x`Y,|ؗDBYeg%caҥ/3Mm?VdeSJ&|SHSC`B @O!8T|Luul>#7whvBcy%#ޚ.mevk>sy"ߩr%z4GC5-8#ߑf2 cXOӏ[ &r9S٨50a3=h ĩy ~\G=E駸F~f^E;^Ea1zUQcglRAwAW('2άw}@g,1=Z0͠9ZmFήQKOHEB;`k%93amۊ޼ܿzd-5GZ֛$uvxʕ5EXu-Eo?YX UduHGK-!{cТYy`*|C9| grZߑZ+}zۦr4Zl p{?_qH 4?cOrwov5*X_P2X74yMv=&eA략<)*י,䍛hMY9@i;A~#Ż"jnŀޭ y&eS'nt-Fws`n6"[MtǦ"/wpX0RT|^6~Nsznf˛k, ng?@tY8O:?|{g2\̨T(Xn]~YmHTTʮ eB~sIR.}}w\cz^^_>f \{)մLX b.@演w4eU|3ETM[<.<05pZU\]}]F˧+,7 Z]f*7BVJ iek!; bQNF-V>*iR,QIC(t]ʾOG|OehW"Uو}'x™wɯjIA겸z_ׁ&v4b W4̑B-Q{%7Xxc %R@!FJJS=mH FrvISQs46#dq"[41&ZJ_ZA!rN)!ujV֮ȁQ10"W )QѬ<ƒ+6ɧDr(P \ ^H@3"\A"d}_E#W\kRr5 >RMYDy4z ;Tri0\\ p+ȍ';B&Obȇ =kT`V!QUF rf3Z\>-ȍÒO;bY䑬2ṅ7D{H$]h+G8;[O?π|J|Ǧ-#h^Fu02>i+"Ն-h+ӴМ®M[~zp-$* 4 )wX3= jb:ry~4ix.XNf}gWf.2s0`weM"HlgHmpdcӑg'ŏB`5E^kHVz6{hyd㌵z[K☐6Ro1ȄA L7R%b0&5P$RM4˦DNn[.16N,2yHևqͱ)h5sPjw tr%*=yuHևqݱ)b=X=?b,<`oX,m99 dkOV ٢K(\zFLs2D{_v2 4Qsu#N;|%L}UVP 缶;+jr %00sVL}|\Mr@AQ*d 7\W]3vjމxƱUNUH]H 䉨5xdЕqN@J3[zm7A(QdʾDJ/DzVƆNAҺӋ*#m࠯ARZ9.T`v '$)J/-Q9%PpJ[^ FxnaF8\~iŏWy+Uyt*tJ]%OӀ'+ݘs8 b_-E!6Z*E WJ4#.-CrruƓ`4)/.ZZ,NYzټDLay=5fO~2 ֍w?OOkF j @gnE(u|㵨˪@n"!~^t"~waɺljo_QLS!~z'~B ;y{ 2)[ mǠ 
'ޤHyC#%acV;tL5hokKSaSz"ޕlQJ+a 6D;UP]6f`/ý7RSWpmgї QC"CY("a /qd327 5*SNDK檤B*00+u][?_o{8EV'ӝc8~C::qm4Z>腲u`nY(yat5 ?Ly7W](1yvQؿm/3JMؗV;)fyTs`r4N!m`(;S4cG=7H b-ߞg'>~H) LpC.# l,{C샟C$8Wp1Zs58g> n:mym" .]wCEZ\c׎O+PQV֗V9**n+T\ VB%A]Z?  N;K9 *%Bɝ9(Z^G ʂ+-@FVZOZy+M]W+YTt]0r!`= ]0ñ[7Yj$h]mt:>6+nNKѨFHq7 pQe}*ԁk :フ4|I!v&͂3(@ Z_HE)Iɶ)yh5FW=6v&tl$DcN޽:I xX_\UM J t *chBG]+x*DqQ{Nk*EŌTCn\GUZPNU#>m܅Jp!sŸ"ӏv%itgBa8N"%-wL)-nKJ!N8Qhj`q@'Y4! Ofl#v''H`N "S 'OɭwGۧvMBΣ:9I'vk 47=JPՒ(YU7~&a5ZlE lu+Ls d"զCaMӖ G'{g'~zp-~IX+;>`a274+p}pd QȄ"v+ , JXe\ۼ@_hx;+w8CJ!#e>Gj8FX G6IlH0J{6RSTjӮԆOV.1 rck<20\EFd>,䍛hM 9Pt-Fwqyci]3:w˿ 7nY6uFkw 'v trEX8w'on}X7ѣmJ -r;)sgv+?Ť}:t% )x@47L˱+!`[̀~t"yuzc0sCgښƑ_Qe^~qUfy835f%[.^mmdI+dԅ) 93Ib}1*r- S[TFLϡa&C՚E|F*P PAYCPՇh-q^ Hk3qS!ܳsk8x\|(~v-m;9* /RM/j0"X9C2Uؔh"[Asa"vQT^z&0 cZMF#>Z@uʜM #@"qpm?'H] rǞ1b|+|k˃kytcTJXKɽȥ8 ͛6;Nep㷝^1΄e^{AC"$ ga@ y"B/)҄GQ %82Lf^{-OJ^>m@5f{ Ch;ڂvY%kd!w[S}G5lgOw.jRoR?-ñe\Ne7cY <* =EgM7?JX,F<5&*`)oTB?ܛyoLuf?#*F_5bX.00PebDS= -@yN'1\{pw2cjw1WOr x@ ^uqrɊ0,cYc)]~Ugh! AD@LI{CGKU?H f[x1ʽW(tYz+Ȩ M϶1h޵:L@953$KGr6B&9k8V&Iۖp?*VO|F 4;x?QXBsy'Zoiz{Wh , R8 y 2@yGQ(L%y@STC)G~=ϧy+d 퇮:'i1 xni5?,o~Qb=-(qs5j*nA*VQS֤bDpB$ea)F0*n/򘈄JtXrȣ(~I<9}mNy*n5|MxINi@)EAT%_|DORoϯP`EyoGeǙ E?RcX7QzF| ׹HA`.]59>9$qdz>93)WJ>M&]I̥zV1(Ubҵx^JW]#lzܐ6iS+fT&tԫc?ٴ>'a8.գ.fyTb+<_~9W<i>S <Lhӥ5>,wCB)ʻ4/\]Q8,`wDDi,(٣O M)c4 i!.m-_| USm_PRdǠ0@B"@8 Q %h* (haPBb̕hvX\ei+R0+#Ƈ,4 _Wk YDPxHAu)VƗO*9sX1 XiӤ{#Ym8,զ\ ׶4+afǥ#ROS|L*?є$NRN3!A\&`jf2ƙ:8Ʃ.l3aFq$PQ +ߗWb.;σoX{5,Ί6ὢk6厵FAe߃!0όvݲ;Us QZ9 ?漿gcqjj\ ͅlj`u4?]iwWa ◫Ѳ/WtR-͛yxЖ/Kiww,GX&`dW+ֈ#pߝT1у?|jA&s Wf:֜%E;[[WFu磽3䩪 Di)|+ |SBf[s$ „) Eq$Kd c$)2,aFe#(d JS6SR F-mN>պMro/`cCL9.1Fwzn}r`AnE^fo &Ci*7XRRtqfA,̘;v!2;y7;({ NuIl| Qn\_epkjZOժVvR܌ϻJωJݓn;yv>4~̍m4PCwﮇb׈6<ihVq ΍TFTMRQ#y4Sv[:vwжnĊRC@޺X%96V?XMwt@G=wt]FxnĭzY[EoP:0 dwt|S֝3S?aNJ~IҢ|nBt$4d(G?ϣ8._}ncӍ; RbXK)od=KHĚL{R`W˛oݥ| ^cJ@Q[;.Nv2>lI zMֻ-̽M ]>+)n&e9?,0^k,Cu7Ԝ%@a(XV?|8Uj}LF'=- u}FxTΒ$l>-ñe,\N 6fLy*g F D1}&Р)źSjO4!ۍJm8bJ*p˜Kl"̑p\Lg/W~(v^Pvod4^-M)o>hȜ=:MŎp) q0'rq*QKlN ,F8Y&?\YK t4*LseE1gQ|pvzi& 
<+dNCoeiVF]]9lY8z8l&<[Ss)R@!PJ B$ gsw+RYFbsJqC&Q>74dل߾!|o V=jodQ[Q7B" ` =.!Dgo i.u<nVA;)U#[vfip S Ca;)E0")boWۃt{~%ε 4@TKes4̠EFwY v Q* 9bC,[1ܐz>R 6[Ղe0R ?glX[ ƌD̔1H!Y5uBȅ@,jz교e (8"Y[|Bd8D % 9 $Bc f0 H qa(~>KUP2 u`С6К)݄η5@Ruɘ{ dC# 3kOPxe}Jnz/L`I^!jc z3x+_*R=IJK1}'_/xʤhF Zz}9}/R̴TږnQcTS͈^"a(_;놖"a9՜`kEk)1\IvB":pϩ3-DRaK)1Ҝjݹ Ry z[2);ǩݣr%^K/QK4R(k -=F>s^C㴖٦bb9\uZ* T 9B6]Ս^[^WThr/0Z7z%, 0(C(KKZ|)G8pnGȞbL݌]$ [%w3(0B. t.Uwl";)rA aւ<%CNafgU0:ƾ[X]%VoVZMF#}!KSԡ 'I|O_ʇ vll ].kyM((vwRs'%ܤfNY`GMA충y-t ƛz=0W[8 4ܩ8@&`JI7mSyI~V^ҵ@{w$E_4;^WmTbNK -|<Ҟw#ZPhjQ/1Zr3-<ϭZJT Vjukei)@wJK㆐SX#~v_5cxxb '^4Sh-$)"@߳m9shO]38m\kaB-{TwA^SnTm^幫g;)gL m{UymܯҊa}X(, HD ]B 0يչ)A11$H'$1"AGQE ٬wîNqe~n0gK)#|#U[N6:hF1$嬷7ݗ|Mwj[F DoB(J TםKА18\r!:@L!IH- ܯ%(I !LD"Ldd) z.aqe ("ӪZ爀z`nP@O6eRh63@X'Ibζ Pp I 8a$2JņC-c*(O7[/V"g]gNXbwh߂g[磽#YMu*Vp(~.6fT[h>4H@Z}Sa4oRSPyX^s2A'ZP:MΊ21eՓ"o&Ň۟n]F1 _=yƠNH?wLPC&P!_FgJ݌pi6gkkn̤%/C֤6qBh԰)9)I&1SKNunyȌ+/҄HFITjg\/j2N\jM5/ VI}8,ͦsi>nt]aZz5BEj+R.$/$Ubfc"S-uwvqkg8u0]-)Eagt_pP)5־MX r .0P>PC_[2&8[!ڒ>H<䞕#_ףD؂%%`0rMaJOw0ڃ/|\|> ,_QŏIb{)4\LЄd(KV(:|aRaSI{c }'c_ &>iCfBLD]'aWn*{.HX~Y_-O7>׷?xs3^xN h󗗗1^;^ ߅X᷾ x*@jx);HmQEI/5T4>w*[cS"F/5]Kj/Vl!ɜu0bʱ cdRXMI r?f6/5A6ߛM)d϶s2\je xG[GkΔ Z\kl>_>[(Imlz01eRy,/~,ףUxټ}/8E^?+3\>=./m nmDl*Rѐo\Et>Tu b:Ϩcs ά[I|ukBCq)'n AnN3X}w6HNc 9c}$Fl!QBh/;SKR0ϒJϥ:p<r[lED_uFCᄟͿVYla噔bØ[K:Ų47e|;HT^ Ҏ)4'qZuϩr3+k;Y&)bϏ}WnOR< {q;+FXʴ/lHu Iϫb~Q Z0Fv"E\숬u03s3r'<Gfu; *ivrK`)łvlxVY 1"P[3~qQϽ5 6MdkNRjeTnn#gET}Q"(뎐jqUۉ(9nCmi΀}tBTG1:Q!~LN|ڒ(!`&2~ü O=Ű ؃OFEa; ɳAakTlp9N+ɻ'$dpҡ1$ݓp};t@N#CObB &-M<&~  &[izSMPNGsC?o޷ò%;!K)mHD7_ } ۚb })a֕_z3}(b D.&y-jk#_1'/+R+]KwZ "{B#}puGpE^g5:DUtA=:rW[R緤MM4=)jԝ7s"!z Q'o@k7QqkSkJXT͡9ؖ)(˦,Wj~5WjWEc8H1)a8ˠq-F(%Sr2 :sGXOqƳdI!Nj&/簐X? 
,()/I1_*_~4_Q//ź5r2f&jv|v)L-3osq\*E㻁y JiK.+R /;*!WR7?}> ZPײ%C]|TZ E߼V ~CiSFN(MݺnY -1Qā gD 2 RŔUQx S\[mV L`@ڙLjYɩb8wn3tC0@s.ͦW}q/ov^<7B(w75w|vY,EAjΘxxqNj˜kaO )Zl gw}-, L2x<L[Ð˭Jd Xj21*FP8w?Xbi2C$Մף9ꢇiL2Y8*?5 bRə+3[H8X00 Z6ca- P$ak,LP2P%Qgkf *8F(̊-7-|#} *q%ksvxׅ@ tJg=ƻqM cQD^ H}ӻTy -"+ =D/.w-C5lN;<%p.ڼ/V_ϾLo C翿$l;Kﰋm:ݔcwE!krOc<+IG$V霺VQ0 NҶ\9w[dB âpIH{~xl &ᡘ'_EeM~l:nsϵ{NYD]Y2c$ DЫG*8D i&]:brWׂWj{0>j,vF" ?]|c3zt1# mo@ P"OM='bZ*/!#Ȇ~ђf'6 w)m}NM(@HN&['^/ZY/Q1PS^99ӝ+5Qxw݇Y<޹k0zNkLF )<0zm!BfIYOk6qNcW=̝t/(FԿ*$b^\]xWfɛeCr??Ws1B*uO-'i]&9faH5s}#* w>&jA,|⣢W 0Bu`>s_O<BaeAL‘l8g13ӞXZ9T yր-$vbѵH`naA)ed.5@{9)?RjOt{VQG̈́#N"_P@ϿvTw!ء [r79}V-[FcO9Ч?-So紦Sz8-c"S9ց&9ii=EcvZtm@-oj"yEV+g;^}ub0K:zѤi<=iy05qD ࠻&ؾ bwSS>"HLW96[KP9 ٯW;Q}u>ׇAS>xToWӟJ+?;\LT8qpc٧0J h-s@Wę&7ӬD$1P8P^TNM9 =t/I})k >A9z"Qn1˃C:%$kJ]B)R)V nIˠ H4,J"8&[1*AN)J@W]<0W^?O!X'u% tæj s;׆IԆ) @9vz+QÅdlߧ bIFHd̡(=AUM)5 m]*/: 5CIL $ݨXos{dZXR{K_*beAD\U\iՕaEY=2gF$ܴ!J zI3FB JT`eMa4!L6ar G<' =J <2)y_"%jf}rcqDŮ0Ĭ,kzȼC:o5 Huظ~﫲ч}\jD0]/ojӽ_ x^,I}/L?0 _8{L "x8cqT 3ZV&Z4QW@0Ye%ۛZ!N oT|/|?^\LDzDyxM¸Eq1oWҪSYxՁARC}-rWCe/;Ql@84Q.`ha9>~}y~qeƘ"Gki{076{P'VwMS˻wy_ǯޝ-%.<^~zp-%>ҩF/drιW:ԝ_>48#:Ѿ|S}6*⯖#%\?tgЊ؜13QEm,1yF]oydG`46aV)DkyrRn@L(~åVQFV3=®yGTKA1&B. AOy]aJ? }Q@}RB?nMl>^H-b8bf_<{:1]Kc>ݭ? 
1P9ȴtiX$ٯ.} ~?f4!nvKEOE빿}ܳy_4T)B޹)[M݈ޭ-)9m+ꔆn[MM !'qn{H7$i,6*B+fnOz,䝛MDs>2$ќ~Eǻ1cz,ndB>_|-/ʋի/b>!)zZT@ͫ]:g{j|4"V'>^rSV9`R%n("~Vֲ\OD-7 y.]z0wOWpէ/H]V !]%-Tûp݅أR4st Ş;ĿPrfSw{2Ds&hxF.cP+'%Kq3y{~kzyVdI嬬Ua)*8 -%)IxնrEI%)h?ϑ*E#ELRkWG q ":.yC&dC \OsBFD YwD;:9*ߢmqˊSl1bXWR(]- d ֆU@Q3BR`=49>qMy!UFJnZ|)@PR%ȅ/JB2Wޟ 4U!mԧsӊD"bcYN8/YXQ/t[S*VUB0n+B)NBdEYVoEܴ"H x`HᯘFXpx0RUX Q0pno%REZb;p0bE7T->cP*X0aJ60|[ c5jb36 &alHA<; @ؓujTF 2x]ſY:+;Z]Uu@$b"b,#wU!bcW.Oω(DW.~9 aj=)g `&B̃%c|;KȦeR-`2 ;4:Ah% T'B21j:UDbEIzx jZ-F'Vy -b/ EY3H1"CѡUCQ6 JAʖ d18De<)D׉,Isvx:mδB&ȳs Dk>\j]v_p%F,K{Km4=r"8C\$B+Wra8W[RmߑÈ0O :EcDh$2"ta!D[ٔӛ 4`jAZwdq{q˘zk]|ׁsmcSP&-D`jA}GwoA25UDօsݰ)hSjC5UZ%6YetylVWW5xZ}r*ѣ¢;ڵ8nfV ;j1>ZvSt)S)]ء5J^fOTc.d?lA'A~ϦO҂O#)Dvh&g3kfXHP老]e{j:9>UgݤW|$h;ފ<R$E?fq4BGu@f 2 55ڷ쯐 ջo!Fo +=\,hurBc/Mf44}PؙRF8" gBN[()%ȖU󬢲P<bٯ$^nPU3[:J_YQBP9R N_4d+ &)_1} ^CP\3oƲЮtexuKQL^)JTR+ q dx0M!#<۸Ob`ֺ{ZRֹ%rta({m ,W oDŹ`^@& Yi'60 õ/`wY& ӵҀt0 YT@q;+0I% N( ^NjCa&#g|G{ ;CE2ޝ !1 zQa$J]Td4/1,h0ʰ$`" G8sV,yU q0P΀ñ3xw{SݽMmtFϿ#TlIp4CLSl*3 [U{;[^ x^,\iB%O.$s >xdJ'7șBGZnkiݮ/.}%,rcqDŮ2 Fh^ !92 H]|^vSN.մD3c f`]DKi:)D^ awMQ.t0ZuLUǴw@m c^ZEYZ49k 9 tޢ,Trzژ8[RR9cJ`[ǹ2 s UJ 2U%-B`[E)]Axky$n &8w|nnM6ߢ]Xu}Ň|ƞ=``VTd  57%`Yķ4ŷ[qU/WCxz&:H"٥X_NL\YQ|$䔫#9/rvb1H/SgJBq|ӷ΄81{ȥTҗL%Q7cփV kMbFz%(KbʣrBǵ %_7kY4B(S"x-r5_DjyWηZ=L5`LaLr<=tzt^0%6уbRoDIα8W9*$ /4£%yP46,Z ,@%kQ-TE6>jĈzIfTqb%ќ'}neJPHbmfimG@nh\REe` XCtwtwGc}Ǧ݄{$+_\YqJu26i8ɆZȲ,PK;\) rI bha8_T?nt2JECԺ;7=tE|rn\*jQ=+/{sj4-z0kjqs:T2_S=9?OՀ}A&A,G4LZ9>vdu:̸ʨt K6P飥0VK%F)&f{nhߒ0\[[&i_TLAZ̠ϮLW;v> 5j3͌g ͗!mG~ N5)9gIWڇʻ'vЎ[qUq)i&MUd8_%Ge 'X r';1&ѥr dORzoRwEԈ߽{\@_WWؖA:ꨰPa]wWyk9v)˨7N*(-BIP# clt3Cd}/_nn~MbvrbMbI>Pwy[gdspJB8"=gj_oȈMFeZ _,~͠>ˤhuWhaKr7S$)ݗ#vΒ^,|,YWչRp)բf5Uunط |[xg}߾uMzM>T)?۫7Ʋ7gԪ_?9$ MTCe† 7t֒l Y q7}!9ϩo[yg yX43i6vlYyVpsZf> 1jg7,XVo,<:A{UUX.6ŃZֵUNIR#rT_oݴ/q/``\dz')G#>:k*wLz͈[z}=sĪ8M!e)} ")tAAN!\.:4[eיalf> 2JpB)&wZa,Jr=(`g1•@WiL![fhG UjfIeTTk$'afMk1}K88ᆥkK>ԑCI"2_BsK#b.nGo7'p] C4aFdTGV3 _A.p/t:kF^ʆkގ 1%+O;MY%'~ۉ'* ^H!0~}'MWWz+5&ʾ\ҡ@  +:aϮoy{[·}'zHe=OdI ēoSEH ?HN{h]J .`=?J 
k=$H%lS'C6C&Gmtm4YFTf2%.v#=d췱80kCVM]p  ݯ Nd,mj%n~Q~Cʦ.Qy.#˹(0(a%r"2?I73v?Bxj?\o{LT7pi[Ͽ@}C^>{yw.K[̈!in uEݚb]pěۛp5yM)׼3|w~|vѣϷ?O^_?&?gԥ{p҃[VC-!nCi/\v2qr {q'B^6?u.6 >VkStwz0)#*"q衯3 2?| ٺQ@, V @2Ai#X$B#i%;\gF+s0\kANw Պ!ެ_jvo&{eCho'ߑ^ΰvʽKzP-͂d xQQmjM {5o. yA9tߨ _]\.N/{qt^!wowqٓ]ulZ;zzbp7>^ӂNӏp]D&hS;[xOGvīW h?a;MT$䙋hGnRڭ'Y/-%3n &s$䙋h/vӵ),;n}1vOkMk72:$䙋hCxLSΗ~'P*ӫϡHs{9Tx~prG .O/O^|-^|>+_L"9Y!g71o5vcj6XsˏkX5/\O)8eTTk.QJZJI)]J)<)MT cRzR*3O|YRyWT[QJXJ)S}w*S=nRm=-% )'Z ÖR<)EHRj%.zܤZRyR*H i9&gaKyR*eҥFljqjZǻ,=Qb=BI<)frO˩7 j#hRzR'U ^H)bVTk~qR*2uHTZ#BJE.V}CR,I!%>إYH8Z+[i Wj5t{Wk-VEEٲdvwZ3m)/a%)w=D0}%Z,]lq݂d 73$)_[IYHYh Ҝy5\n<k*c8^ϟbNK{e<Ŗm9X:4sbmۺiKaYXwO?Kdo N%l#"bĸV\*O@VEa9jsR%̀} oDpP> l-]st^VA[(b"B)gL'# ,-e|, 2uqw@_٦Y=76X}΍[vRV)aHά`R2}җW`heLrhZ Q2W]t|2*)r4oEC,n?߼HC_U 2Rʚjǥk#^EK/B8 D2},ta)zBRh% Aj[6GS>ޞzmo?to2IIH ALY0"wBY(5{Bo  <ҷD%#6g6)'&8>o2sqfn }ݤ^NBնN+ޅk2&Q+:/no.oo^%W3}t!!΃|r*| 9;k&Wխ&؋N˩QM>ƫ8^%2oύy4篳ўn[_K koqa|.aU<:')1\ˡt@" $)8V- ӹi51lnak&,)\S~J1M q̒-S46}+y+qÈ-"5U7ZjygeXRCb=Wͤ9OjXGԅ#B1{/ *kz8} 7 OR7NI1S6dޔ3ay{E+-?o ;FL11cpWgC% .-cKoJł;L+fK+*p~?LB{i0AClI.k:Sh;/5Âj#LJ$s4N#h]z+0mU ,Fq◴6Z\hŹnE596\΂|o0!¹|_Ekxo'U;q'!)AAe)tY/J+"F,xMY hif0lN3܌R`N _|t9gRd3!ZE1Hs6P}li|)d}'wA]U)փ/ e(1$z,Ax-f)B{u`\۪a5d#ZUE2J**ұqjjVj!xn#SD6hЃ8Ws+F"e߳ŕQIQ`c Cf RsbǂW@<[| fJ[o?jW00!-f4<$ffvY A;PZU,+˂0R"(1MYQ(DpD>])+&ut&r4kknHŗ= _T[JmrJ싷T``sMZ^8[g!HJ$&.٢ FL_5u׸u3L4uhv~v|,|[y*Btb6ƄPF>&ԖyoW?k+ud%9Q&9VT:qN뉀"}v5EHA=vh){'`nN5ͨP- ~Ĩ޼w7=TKf"Zt-RLMVOft92-Y:U'׳/7Ӧ##7V{c~ߌ?T>z]ś]=ߪz/7[~o 2߫jֵ:2x?2tnoH>Y,{㝰ʮ2k+RXb}mK $V3W?eL!/\ECtJ{׎uܡu >u΋i1%5^ibh W ]놩u >um15^#ibh W:%Br9[Do]+t$qRH[`v0o$Y f"nD0Q*މVIpc=rPa)@* .67QOF"D#T.52C"Y߾Opk뇅Li=?VBH `KU CY~5Չ=nVX %9F.gt}fW^׳3ńtcn>ی=w59/`E-0 {zܮ{3Wc'@ᓟujmJ,{s1 #5*SM?ԭB֛F$ *I3}ݜjF3EB"W,T>gԉ7vgO xc y*S4loޱnԭm2W,P^8Ϩ+ݾQL:zX$699>_W=s̺%ۥ]]9潰l5 ĥNEs_]&޾؎d|.B1\X5л.r GHRծ $Y؅~Е mtEY6\qX+z2c=aw`A: WLTA)j"`n6H:ϓ=o rKg,TB&uKgS+- :$CHWŊ9f#,Oo6<.{<:qeBW;֡Y9UTIYx1(vt`dS+WV)R\JOf#aʙG%D\*Nri"BJba_ 9|piKdAbVSj_EˏZ7ɂ?TS{gB<O4d0"`NyK@dj4Es0##cJXt'BPG9b Ѕm,$^y"d "2 rR%Yn@LL2K3S u'P #[)[4fP`خ8s { 
\Wggnj!#B륪QmFF<+Uѭ_1[;c`ORY{\:8e}CDN8vQTMiCcy8ҠuzHߪu![hDFCIuinǺW~2u(DG/aܾulQ1C@gUՙSQWƟKVsM S٥ӌ$$Bg, uJ9ӈ i@<99#@>WGAt_f=amT;3&7.-*`zAb+\^4ʶT|gN٪|56 cˏonuǒS̓χ A䢸+ѯkm(51ShV ~}5b6SoWþ߱%=Tu3ғcd "f6M׾C+7fvt~ qnp0[ ͍ele!vؖX xݔ_.O#c@99rDq'[ Ddz2A 6iH(IP6H)#vPc m=aJx2o m1dՐ30O7AmȕdfZSeFK k}ONˏ_cP֞9~X*Tl*@N<-Jmj6~B0fK,wvIWӘIՋW b[˟JF|^TrgN,rpK&^D_/Zo"r!DbǚKfC1 Z^K?F RPQCSh/h{ʫ[Gq?B~5բGLZ] j ehB0GIXk÷2"ky-'?aZ2g }jPf oN5b6ޜjh87[c~` z!;TK02T>CQgewoT77|ewCC^~%ٱnmԡu הgԱn, fK<[ y*S~ǹkݤ$}n2QgԱn" NSc[ٞ-pg/D6ۏU%wfr]끏*X5c"#5=yR z)`mFgJ8c@v"zz@*(*fJcN  I%VL!2QZ" x T) eT",>TƜ@O"0)ļs(+#ϳkc# wnYbX}=Wj֕wx>qY2.nVs=f~q=?; .YjlLKu;ؾXoZna7;bQ:˾^YGύxyo vObx'="{ 3ˑ@8b F88TNC{5?j,7saиXs䅅`ӛ _SvR O=(N/{+|-!C$IѤbAç>@,!ފ-@E{eÁ8xfyᇺy5悋4Cqfgs,r3)S0VbR.r(FQ k/bfjxno_&өMfְ Uڴʤ'VboIrlU0*XO,Ș%Ԋ]!0f2WJ䣩](@ $jbDP*L@JT"2+Le<:$GE$9 -EPAYnI4'J$*I,"k`[e=6Ce=vm!K'ǎ6'塒ӡ2$6s'k;Ka^Wd}|~؁U&1x @W^p5-$EuԖhZ Z2HQ,l/+Ts+%' 6~Z`F腖>Dej18iRD:@^hCT_P->=-E~#,ƀb"Da/T3LIKzeD%RCzZ񟶖"ꧥR{Q}Y#OQKR,RzQ}Y6:iR*! -}* y'o(HqݗR'ߝ>Lej28iSR(R OT_P-c?-^h)~ZZP:% ɿڝjeٗ.7Fs1;``6|++&Hd'F3P@œTg%yAt.Y{p5i}O"!DP觺@OO]l]iq61q8|1N]Yב؜|} ;Vdj 98=pOLat(ʮ“^1H g=QJp2I/vOu&4a*R%IiANה6J@)b8ϰ́840@XBB]%j:g l$ÝgTd@` uh!BYJ '%H`HeB$4WAbdBnuM<6M)5Ysm$d]$_o}jn$n~5y._'=UD߮V!3k1& <8IA.\!%G L+kb r0DȄPH2tRi h'FǘaYW7񃶷jYcz/͑ړz(f(HxZ{?Xm0` h[9&BJ/թwT䇬ށ_^ҴzzV<-Lڎ [u~x~PR$T/?xƠzH~_\ 93nǧw\>RIg|X]Y/Hm*%>hLH.0nOO߾ѻ\vO$@>Ј$4\Pv^_ջRE 05aD{Lu;`: 4$YOt m0웠?MˋN!hyb]~\B4FL4sG\ h{\ڝ{vkׂNvoE<|=FQֵG0UjpOtqHW)nO V{=OwqQp@ZE]V-!hSOE^=;~䛅R=^v;\!E+d1W%x5M9i"x?-~޹o?-P7th{麐!PʹhGI8C}K6 ZR!K =j1b՞,5Nç?ߣkY=n3Xr)5bcc:q/UwV+3M%W}]/w?C cNn ʫ.iSJwfa_E7]wcW=.&3>Mk;/})\/ZΪ,ay`rDt4zv_ mwT.\nu!'Q>eq#Vtq-V>6^E6 +&=[r.SRN>Cފn ܦ[|\'!mYٔ`-,gtCNE7|twŶJپ,ٗʹ2ut?{Z[:Z=8s{GQ뇋7ӗnr<f=ϲF% gyt%3eEP҇.\5__:,Sb"C@KT ;LX$FM#"nCpPH0JMPvG5Dg0iM`|oCX΀s{;u97#0󊕩ᕭҒ/~ޭFMIJtIdiC<̲dx a>>/2ΞoSq褻&}l &O[u;>Rpd)3<xZÂaOnm=`B[>D_XeǞm{]jAJT^&L\L&E^UB EX !aDVfY !rt9hQ5 2/Sҹ49J %Ye"}L)z2TeMrCCn'sxX- ;_y;^Ьd^}vf& Ua1G = E*mW:HgYeVT ,ĸ3;W^ >͙Bn3piGʽ%[ eFkU39jz)9B{{pN`^?PuC`[$rXZʔݟ_ 럺 a9qo?Pk>O顤Z|bqWש/.$c/l7f13@?[ 
Vиv̉@X%1NgD]cCė4`ڮhž;b ~+,\ Gf@ډckIKI [P@5no/|/F-J]h6Vr-Քmk8&ZX13ϬVLbֆYeᘵ7vV-F' jNrwOt[+1?%(`?I5B5 ǣM{sh+nO:nڣqR3@cߤz1HV s 캮.PKm DDŖDHM֣. OaE~ +P-ѥ^6L LT7B@gtkxbߠtօgfctHtSN/F^70takV}1xȉhO!N>܎nW1ubƜEhި-;şn]xȉOOAZ;5a'%=/0W"G!=.EBcp&004$ZaMaƤnlÙŚ1M#R@G]3dsIM=#@ qE;5V3<7s-?BH֕gKGB{:-2V"=E(sŎ(#z-HY<dz0hj=,&xp0F*ݜD#Okh = -3!oeL=q?"L^q]E/\X1 \iRWG2ڲ\()*iXi9 $JJr=/` j5?9Wni.xQHY5e C#s)SҌ PC(A*@qnv̍Xd P(؋Xh PRB0[!V̭-Je yڦWZ<+JҬDߋT $?~/Vy@0-1"Y,DgSC-6+lY(sKw jR1Z(9 kܡ]FH$_4j5*'>Z5:>G .A%}iP+!]Gpnp kHi)4mO[' `KXjwK3rl=':Hs5N:|k&;\yIHeBL!7V|<+uU{*qx^zXM58{Q{)~^ʭR$[Rg/=F/ =W> yIƩnʢqkvqXuaQ?Uab]`vmnd̲ Sj`L7a$BMrlp@tYwձ[# u?owP]:626g񧅱ńǢbrY:O&4֔U=x9O#>y{ԇ{YaҤU/:rfiְMeGDH}#/mBm[=l[8ǫ~ 7y'Oz殞 H& zBtw0hƞ˫(51g|jrW6+@8 Ug*҂UXRF}gBUuOl:i!28Jnȳ,e.Ӳ2&, dlxfEisDY..X,̖S8=w 6̢ u"iYm42ǣٶBX \vr4U9c~ޔXaڦwۦT{k@Z&-crV[!:ԠmKTa{]=ª[墉ys#+KKIl5(v)5}TOT7BhLΦ >h9' 9q)G&y㮤1ݑ:!G~#=Zljp8`wt!/9qTS86O!ubƜEhˆOxF.<]tç{1z~^*JMJ^ 5p@- e%3he+h ]kP0d)]ZQR\J<܊>\B9i޻3J_c$#n-JV~^]eSo)n[M;\tC.FPOIө_|],./1WF[s<&&Mz;['˷2_ IPVOn_G ?Cl, w fnZCS2.0Bl<+a?Oe'\ a-,.[hqiu}DJͅzGw{M^qS&KwڒbXNhX뜔OWӒn|l=Y7$W$WiN]VkR==BMb{kMv m*0PYz9Bͭ0/lQ2Mw ywOG Mu#a><)YGԣh\krN}=#@MBl'ZqgR uoZ.Y=_߻6҉㿈j0Z] @>~s{2`8DhXCD(6lp 8vrd)PkWR =4ng5J'.WfKd2KYUYZ(XV̲1*XKC[| 7Uߖ [[ŜFƯǹRP~3+ozµNC\^6pM"$ 6hgMaF1&yeQ(4Brj±CƦo?D 2 h}gY fA+bwO&?xN}/ 2{5bAL*!V6 k?P'ŝOYaL`=k۶޽an6h[,ڠD*c'M/)cٔd,"yyxs-*`6 ΛKy;'.MX%ei$0ёŅJضv9B26xD,;lpUSߩ0';lM̸'~h{g6B] |9A.)6eY4&5h=4aG' mb ~ߍKsc.LpO#&5SZӇ**q҄(R  8HӘ%Ȕ *0ӘS⧮ LlYU/tWIwk- a$I1a)LrEV%$FKJ$ &1O GR z[ [<ĔZI-x(Ĕ:\ë_u@.H{`.Mk%[#!;m$u[mv?aM*cN=:e2W"˞-_g=[ѝe4ѳ*Flg,:GЖlg ƵYRwrkA!nAKBwkr>W5\ XQqQZQb·qVrT{wu6'2xn %\=Tރw'F+ld7A=7U$6\2H3( "%5DjN;x%K7di7E=] 69U$1 P 1Av>b(H +Dv Tl\J0UM d Ɛ>h1E^sڮ&J@zmx |E bhp ȨJ}E2*  d%#2NvQ,jL_[iGyQΠXAt8Ь2 4Y2D 3ԮRcՂۭ6W}jU3lg VĽR4[5"?*(kV%X/تoWiWMdx5aTz|շ @OJ1B5USNYJY'0w_F<%]v%l@UKem_?( Ŗjĺv(.dƵqJ߀Ug Umj'6{K0vWe[T>Dzx&ڀ$&јF I(њQὛ4ð*$F`:ocboU{^Gz!dGB"*QunJ[6|~C1eT%秲H+3?VDp`ct%NC]K?NhP< dG_m1DRY7b&N.v7}p5guRU R@$\a$\ JjEEbPB ;0C,R׉eƞCq! 
)F*2fR8Z(.u\ K$0e,@"/ֽ~SJ2œYs9mS~ݲy:x4 wqS5z( 2w:, st7v>F6G/Jz2qd~noFZ9/rK3d/MYۼjkv_mEgyxmǡgyl6Wwg>0w/]\YN|/?:ϗ?Y? l5*Bw71'_۩n*mZp $CMg˔f-kNfw0b>->ݼZYI8ib֟__5|gO$_lONM&.HiƣTמx8n5Vƽ)f9{*>9z\BK!  VGbؖGQe+.@11{W&fޮU ~0h{Ig#O` +2&i~_?~.ދ`8I;٪E ]u0_7iŷ[ Jߠ7(A oP|P|+]јB(z\tH B"ePWZ,BvP(6`N%':XDT*fOAa)(J( ԇJUyɸȢvژ(2/F]9f˓HV3BT( )r5tYU\!"I~U7ٗ`|F0(\KxGr~KpB(.{3=!s|i%RrDA#a>d,B OR&$kR(W‚T*aA )*AivQyuBә0C+Aʯ@$)X 'KhXƱ@C%jETT.% 9Lba #p|!L (L =C@aaKID T1 !gG 7^'Xu!J(.00~cXl՜"8B5cO-8մERH$ Q Q ޖɱ```` Q`Ax"2K/|lշV+["_'W6Px+$+(^!a0 ZWƩ֏Bp|-tnQ̂waw2uiUZ#!%1?|C8C|u|U- >``>஬/jYtk}/)Ъ2())9O9T}wT7pʸw, s 嫫?I~EP!r&@dXF;]] 'HI{Yk*J.W`[+\޽RS7!FjDK T1s %H|,$"Q˜BvK~`$.R>5#a0ň# )Z8&HٷJ`TVx2JJ-]d=aAX/03r&[5B/U{a*P)L6,khQ.zUZu(S-PLI)Z#~)LcDqӈĚD$bv*R<ˠU}^}Rn*DzT#jQ/,{/WUT2"kOa W㠆GbJ"Kr4PPe&CrBUyU,PU(PCUntX8B8Ҏ"YQ 3M<=oDO^b! T/G #7ړr͙0GM)>0HM!aB r,Q*`#Z2{n!^#yoB @, |3nh,:bRxsa#0H;BM Vs7ۏQ|'~o.stߊt~A"d@@tl$*QX FPPLTL1 ~vm#i{.G;>`x;h*S2+׹?Vٜ0iưǺgPHxًRA HX]b(4ҤH۔)1q1* N[ *=ϻQ^;UP@x |X$>"8$8/K7)c#ep^y$yyy#w#|.*HenKv>}= bժ%jA mh Іjg` Z%I$QM[Y܆ JECBJRO>u},..O=e SOAŐ@N qNv`%x Bv`G{D `ف$\`IWفTzO:UЩy7nh5iPԋ{U {AE EP/B^GE{Qvџ=>41X[kW+ѝ6=k~kXryYH#xsk\rռ%n ZB hoWmEY,'oGBPOY?. fւ!-`v B vk ~ tB[jF U3!]'"))8_B v&kr`M1ح[+]x&( IԌoC 6D6!r->E"U@t4 H2 R:F]0q_2˄, C xǔ20 @, җb4'iJ') E +t7T?3թifԳ! J<-xʏa],HK&>DDmia%{.!꽕{f!>`OGH?MNvu8VT|蝚?9Λ7|~Y7Z^G'BM2-A/{kTHB!b;[р@W''{hNfV?,oAdwsX5 a@Q~t\:EaHg0o|>ߣ?̇&y>Ô@О"Z6 ԇwG\/(g!4@xڨŷwpc*+g}57IB&ON>ܿOoJ ǿ?jroG1Ay֧As]JMSS:)pÝc#B@66ڕ_vN觗tk!LSڌ'=mWBƖp]ã;GeC*bb?0ڀ[в񝣴l!ǐ(F" -=FU)663w$N%Jz)6w]c5=ӤY7S<7b"BV'w%6 ^7y(ҽtt[Q Z| pt Y5j탣I}U`qtj$IVw/x}n.6:{fxKW@c8(k!T*w7 >tt96_B0r0,h,9L=n ĂiĐELR0{"R JEV%]sHWT^M<y[7{|'%o%H` ̈n\% BMw?t}X6#etq,-T2K۳,cS${V"tRTuN{=A'~`m))~m|lltOaA.8}VΜBus+OgQ:(%i3lI97͏߾5|{ioOoj[րX5 w1`; )r.*..m3w͓'VY9[36>aaYmv0*GIC#RE36w47U!Tlxa"g/R)MTt󃹊}YuZP#Ix#v%]Ge0NЁvA/.:Ãz_)Baw"QpF&|[@$=-Ca渶^-+nA\R|y9 :ʕ"<?Xߌ)aVIQ:4\'Qu`F<,҈Y[kJk ~M}P|7ꡅ>$ͼ>E.8g1N. 
7iIA;M{-3~g9dƩLT³v>;=mTs VXu_TǛ׸.xEMntJg*V|bA,쵊x{U讈([),zbvk8jS62 TЄ l_TV8gGqfG:`vR2WOcm8Y6($ضV-FgAg,{v4=Etva0 @[P|ih~T2Q0CC~쒡lQTId(j :@h. N71d⹊ о&t !&GPc`bB -Ƅ3-„Nw*dn"rLa'zYY$27| J$J83o7>ǁȐ뛢C|smgm:$Qm+A"5c1c[H#Ͻ@*m="{ӥR Kz/zׄQGRjR7o\3ιߑ܎{˂:oL](Q|R EқKxޥtep)%ȍ:$Ky]J}H@1l fB B2нK.k93wgt5P EZ0tg3Tq\=!TTRH!5BXC,ha!.Z UC )٣:gmkFihЭF[U澈@xC B!A6+7|˸ܙ#:T=C䘄L48`,4PI{OЎ ̲^E/_ݢ0r݌7nyZ:TOJoRsDn٦cMTO8ѤYNySEH8D&i"$PnU!՟£a4ϣ9VKAAAZn]POi vCeOslxQ8o q'XJmqINz]7)>E(L6uׅ\S2̘n'+i >vJ`txn4_ ̆pp𯃃nTY(t:B_j G~fTτsI彑 KApكJ%T~m}g%69 7"3v'TRVHp7P. Hie=(;4NӮb˗_;7@,N,_~}ȌtslS_'+Mܧr{97a NNk?}~n;6"::6/yzXӣ?W>֞<}mdyDRZ%m~s'vu[5cX p\7E2N5''ez<Fq9Z[ÞLyY"y}ٙ>tOO}$ƕ&ƽ|X:Ӌh{LU||5CMG:KXtþWkhl;Q+&Ƽк2}r0TK \3e/}X *S@(za*$RLʜ 6/03}dgJ`!Q\(LoٙXNԵBbYMggFDb׹Dc6O F gH0OBA>fHjӑ0PT\?^``5Z^W\`/ `7``7[LKXm0%`#---V<'-V{-a MMbY\-%3sh GK -v~pRZ%EZ%}Z DK( s+%f$9^W,9Jrŝ$'Vk/jR)GD,=k/Oym|C{n"'?_gW`EBgZb󐄳*P-+sq Kgׯ#rA7ﴂ \"4+I~՞-i \O${V2UH0 >4_+b*ic]@ƋnW+qQvPZCZV&94:F5$g\P(,F]ft;l0ofq%+,b4|ܮG0}Aٌ.$<{2}(=mq }rktahgQɁV`Q^ѳg&O)l!RZ!1Bq,@DYXTua{DD/,Ù%(Ĕ@uz*<4 vx3cP6K;p 0bXM&6{ȸMh}k4\dZ hkx~m%t#>f?8B U # #Vܞ4Ie{XZ{F";}f0Cd5SɊIJZrqb2k&],?i#M-#_LH' *el];G,i~Frw;ْqەܰ-] ѵ0z_3 ~4\/K{uفUc\nخѣLka`42cFv5>_/zE6y"%XxHxt|ᗤ/ۏl0*icqn!ԽGs]G(ҽw2ёfd4̢{&-O]Z Zm+cԅ-RB,deu4~0#bap"ƒe>*!Q#QZK#=<1<h @18.E & aUaxV/HfbA:A!ޞO,rw]H=!LU(Bd{x3t)(7'n7'  PBqlj2:DaN7ćqwLMkoZ K\0)Qy!B(Fب?dPc͠var/home/core/zuul-output/logs/kubelet.log0000644000000000000000005020354415145422006017677 0ustar rootrootFeb 18 19:17:19 crc systemd[1]: Starting Kubernetes Kubelet... 
Feb 18 19:17:19 crc restorecon[4685]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c476,c820 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c263,c871 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 18 19:17:19 crc restorecon[4685]:
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:17:19 crc 
restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 
19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:17:19 crc 
restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc 
restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 18 19:17:19 crc restorecon[4685]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc 
restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:19 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 19:17:20 crc 
restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc 
restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:17:20 crc restorecon[4685]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 19:17:20 crc restorecon[4685]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 18 19:17:20 crc kubenswrapper[4942]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 19:17:20 crc kubenswrapper[4942]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 18 19:17:20 crc kubenswrapper[4942]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 19:17:20 crc kubenswrapper[4942]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 18 19:17:20 crc kubenswrapper[4942]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 18 19:17:20 crc kubenswrapper[4942]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.756875 4942 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763251 4942 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763297 4942 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763302 4942 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763308 4942 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763314 4942 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763320 4942 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763325 4942 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763330 4942 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763335 4942 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763340 4942 
feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763345 4942 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763350 4942 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763355 4942 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763359 4942 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763363 4942 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763367 4942 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763371 4942 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763376 4942 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763379 4942 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763384 4942 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763389 4942 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763404 4942 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763408 4942 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763412 4942 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763416 4942 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763420 4942 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763424 4942 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763429 4942 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763434 4942 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763438 4942 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763442 4942 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763445 4942 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763451 4942 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763455 4942 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763459 4942 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763463 4942 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763466 4942 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763470 4942 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763476 4942 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763480 4942 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763483 4942 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763487 4942 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763490 4942 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763494 4942 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763497 4942 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763501 4942 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763505 4942 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 19:17:20 
crc kubenswrapper[4942]: W0218 19:17:20.763510 4942 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763515 4942 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763528 4942 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763532 4942 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763535 4942 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763539 4942 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763545 4942 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763549 4942 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763553 4942 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763556 4942 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763559 4942 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763563 4942 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763567 4942 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763570 4942 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 
19:17:20.763573 4942 feature_gate.go:330] unrecognized feature gate: Example Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763577 4942 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763580 4942 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763584 4942 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763587 4942 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763590 4942 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763594 4942 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763597 4942 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763600 4942 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.763605 4942 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763719 4942 flags.go:64] FLAG: --address="0.0.0.0" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763732 4942 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763743 4942 flags.go:64] FLAG: --anonymous-auth="true" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763750 4942 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763775 4942 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763782 4942 
flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763792 4942 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763801 4942 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763807 4942 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763812 4942 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763818 4942 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763825 4942 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763830 4942 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763835 4942 flags.go:64] FLAG: --cgroup-root="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763852 4942 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763858 4942 flags.go:64] FLAG: --client-ca-file="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763863 4942 flags.go:64] FLAG: --cloud-config="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763868 4942 flags.go:64] FLAG: --cloud-provider="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763873 4942 flags.go:64] FLAG: --cluster-dns="[]" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763884 4942 flags.go:64] FLAG: --cluster-domain="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763889 4942 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763894 4942 flags.go:64] FLAG: --config-dir="" Feb 18 19:17:20 crc kubenswrapper[4942]: 
I0218 19:17:20.763899 4942 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763904 4942 flags.go:64] FLAG: --container-log-max-files="5"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763914 4942 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763919 4942 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763925 4942 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763934 4942 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763939 4942 flags.go:64] FLAG: --contention-profiling="false"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763944 4942 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763948 4942 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763955 4942 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763960 4942 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763966 4942 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763970 4942 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763975 4942 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763979 4942 flags.go:64] FLAG: --enable-load-reader="false"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763983 4942 flags.go:64] FLAG: --enable-server="true"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.763988 4942 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764000 4942 flags.go:64] FLAG: --event-burst="100"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764006 4942 flags.go:64] FLAG: --event-qps="50"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764011 4942 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764016 4942 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764021 4942 flags.go:64] FLAG: --eviction-hard=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764029 4942 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764034 4942 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764039 4942 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764045 4942 flags.go:64] FLAG: --eviction-soft=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764050 4942 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764055 4942 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764069 4942 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764074 4942 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764078 4942 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764082 4942 flags.go:64] FLAG: --fail-swap-on="true"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764087 4942 flags.go:64] FLAG: --feature-gates=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764093 4942 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764097 4942 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764102 4942 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764106 4942 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764111 4942 flags.go:64] FLAG: --healthz-port="10248"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764116 4942 flags.go:64] FLAG: --help="false"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764120 4942 flags.go:64] FLAG: --hostname-override=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764124 4942 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764130 4942 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764134 4942 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764139 4942 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764143 4942 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764148 4942 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764152 4942 flags.go:64] FLAG: --image-service-endpoint=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764156 4942 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764160 4942 flags.go:64] FLAG: --kube-api-burst="100"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764164 4942 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764169 4942 flags.go:64] FLAG: --kube-api-qps="50"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764173 4942 flags.go:64] FLAG: --kube-reserved=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764177 4942 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764181 4942 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764186 4942 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764190 4942 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764194 4942 flags.go:64] FLAG: --lock-file=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764198 4942 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764202 4942 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764207 4942 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764214 4942 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764218 4942 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764222 4942 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764227 4942 flags.go:64] FLAG: --logging-format="text"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764237 4942 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764242 4942 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764247 4942 flags.go:64] FLAG: --manifest-url=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764251 4942 flags.go:64] FLAG: --manifest-url-header=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764258 4942 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764263 4942 flags.go:64] FLAG: --max-open-files="1000000"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764269 4942 flags.go:64] FLAG: --max-pods="110"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764274 4942 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764286 4942 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764290 4942 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764295 4942 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764300 4942 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764304 4942 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764309 4942 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764323 4942 flags.go:64] FLAG: --node-status-max-images="50"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764327 4942 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764332 4942 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764337 4942 flags.go:64] FLAG: --pod-cidr=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764341 4942 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764347 4942 flags.go:64] FLAG: --pod-manifest-path=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764351 4942 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764356 4942 flags.go:64] FLAG: --pods-per-core="0"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764360 4942 flags.go:64] FLAG: --port="10250"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764364 4942 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764368 4942 flags.go:64] FLAG: --provider-id=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764373 4942 flags.go:64] FLAG: --qos-reserved=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764377 4942 flags.go:64] FLAG: --read-only-port="10255"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764381 4942 flags.go:64] FLAG: --register-node="true"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764385 4942 flags.go:64] FLAG: --register-schedulable="true"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764390 4942 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764399 4942 flags.go:64] FLAG: --registry-burst="10"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764403 4942 flags.go:64] FLAG: --registry-qps="5"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764407 4942 flags.go:64] FLAG: --reserved-cpus=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764411 4942 flags.go:64] FLAG: --reserved-memory=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764418 4942 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764611 4942 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764623 4942 flags.go:64] FLAG: --rotate-certificates="false"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764628 4942 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764632 4942 flags.go:64] FLAG: --runonce="false"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764636 4942 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764640 4942 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764646 4942 flags.go:64] FLAG: --seccomp-default="false"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764651 4942 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764655 4942 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764660 4942 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764667 4942 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764671 4942 flags.go:64] FLAG: --storage-driver-password="root"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764675 4942 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764680 4942 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764685 4942 flags.go:64] FLAG: --storage-driver-user="root"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764689 4942 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764693 4942 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764699 4942 flags.go:64] FLAG: --system-cgroups=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764703 4942 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764719 4942 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764724 4942 flags.go:64] FLAG: --tls-cert-file=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764728 4942 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764740 4942 flags.go:64] FLAG: --tls-min-version=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764745 4942 flags.go:64] FLAG: --tls-private-key-file=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764749 4942 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764753 4942 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764784 4942 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764803 4942 flags.go:64] FLAG: --v="2"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764812 4942 flags.go:64] FLAG: --version="false"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764820 4942 flags.go:64] FLAG: --vmodule=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764827 4942 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.764831 4942 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765003 4942 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765009 4942 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765013 4942 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765017 4942 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765022 4942 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765033 4942 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765039 4942 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765043 4942 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765046 4942 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765050 4942 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765056 4942 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765059 4942 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765063 4942 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765067 4942 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765070 4942 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765073 4942 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765077 4942 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765080 4942 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765084 4942 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765088 4942 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765091 4942 feature_gate.go:330] unrecognized feature gate: Example
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765094 4942 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765099 4942 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765144 4942 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765149 4942 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765154 4942 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765158 4942 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765163 4942 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765167 4942 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765172 4942 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765176 4942 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765181 4942 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765185 4942 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765189 4942 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765193 4942 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765197 4942 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765202 4942 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765206 4942 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765256 4942 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765260 4942 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765264 4942 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765276 4942 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765282 4942 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765285 4942 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765289 4942 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765292 4942 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765296 4942 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765300 4942 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765304 4942 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765309 4942 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765313 4942 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765317 4942 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765321 4942 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765325 4942 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765329 4942 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765333 4942 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765338 4942 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765342 4942 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765346 4942 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765350 4942 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765362 4942 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765366 4942 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765369 4942 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765373 4942 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765376 4942 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765380 4942 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765383 4942 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765387 4942 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765391 4942 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765394 4942 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.765402 4942 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.765410 4942 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.782237 4942 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.782303 4942 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782413 4942 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782423 4942 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782430 4942 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782435 4942 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782441 4942 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782447 4942 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782455 4942 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782465 4942 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782471 4942 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782477 4942 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782483 4942 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782488 4942 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782494 4942 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782500 4942 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782506 4942 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782513 4942 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782519 4942 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782525 4942 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782533 4942 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782539 4942 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782545 4942 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782550 4942 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782555 4942 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782561 4942 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782566 4942 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782571 4942 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782577 4942 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782582 4942 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782587 4942 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782592 4942 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782598 4942 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782603 4942 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782611 4942 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782617 4942 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782624 4942 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782631 4942 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782637 4942 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782644 4942 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782650 4942 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782660 4942 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782668 4942 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782677 4942 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782685 4942 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782692 4942 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782700 4942 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782707 4942 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782714 4942 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782721 4942 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782727 4942 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782736 4942 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782745 4942 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782753 4942 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782787 4942 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782801 4942 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782809 4942 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782815 4942 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782821 4942 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782826 4942 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782832 4942 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782839 4942 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782844 4942 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782849 4942 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782855 4942 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782862 4942 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782869 4942 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782876 4942 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782882 4942 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782887 4942 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782892 4942 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782898 4942 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.782903 4942 feature_gate.go:330] unrecognized feature gate: Example
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.782914 4942 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783109 4942 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783118 4942 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783124 4942 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783130 4942 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783135 4942 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783140 4942 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783145 4942 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783151 4942 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783159 4942 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783169 4942 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783177 4942 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783184 4942 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783192 4942 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783199 4942 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783207 4942 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783212 4942 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783217 4942 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783222 4942 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783228 4942 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783233 4942 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783239 4942 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783244 4942 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783249 4942 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783254 4942 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783260 4942 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783265 4942 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783272 4942 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783278 4942 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783284 4942 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783290 4942 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783295 4942 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783300 4942 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783305 4942 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783311 4942 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783316 4942 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783321
4942 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783327 4942 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783332 4942 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783337 4942 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783342 4942 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783347 4942 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783353 4942 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783359 4942 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783364 4942 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783369 4942 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783374 4942 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783380 4942 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783385 4942 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783391 4942 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783398 4942 feature_gate.go:330] unrecognized feature gate: Example
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783403 4942 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783408 4942 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783414 4942 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783419 4942 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783424 4942 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783429 4942 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783436 4942 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783443 4942 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783449 4942 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783455 4942 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783461 4942 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783467 4942 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783473 4942 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783478 4942 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783484 4942 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783489 4942 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783494 4942 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783499 4942 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783504 4942 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783509 4942 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.783515 4942 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.783524 4942 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.784726 4942 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.792948 4942 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.793159 4942 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.795003 4942 server.go:997] "Starting client certificate rotation"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.795061 4942 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.795328 4942 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-22 22:01:25.886076315 +0000 UTC
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.795506 4942 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.825099 4942 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 18 19:17:20 crc kubenswrapper[4942]: E0218 19:17:20.826985 4942 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.829357 4942 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.846753 4942 log.go:25] "Validated CRI v1 runtime API"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.884490 4942 log.go:25] "Validated CRI v1 image API"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.887174 4942 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.895456 4942 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-18-19-12-30-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.895504 4942 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.924865 4942 manager.go:217] Machine: {Timestamp:2026-02-18 19:17:20.920004752 +0000 UTC m=+0.624937487 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:15e4da6b-0b96-4412-ada2-f835d7e5f88a BootID:26ba8477-3134-4454-b1a3-81cc0f315017 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:88:f4:b2 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:88:f4:b2 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:3d:2a:4a Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:ef:3a:9e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:62:2d:57 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:4a:ea:0d Speed:-1 Mtu:1496} {Name:eth10 MacAddress:2a:2f:04:c9:87:aa Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:9a:c4:64:70:dc:12 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.925332 4942 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.925637 4942 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.926259 4942 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.926631 4942 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.926686 4942 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.927068 4942 topology_manager.go:138] "Creating topology manager with none policy"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.927089 4942 container_manager_linux.go:303] "Creating device plugin manager"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.927655 4942 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.927713 4942 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.928106 4942 state_mem.go:36] "Initialized new in-memory state store"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.928266 4942 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.933054 4942 kubelet.go:418] "Attempting to sync node with API server"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.933113 4942 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.933172 4942 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.933198 4942 kubelet.go:324] "Adding apiserver pod source"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.933229 4942 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.937203 4942 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused
Feb 18 19:17:20 crc kubenswrapper[4942]: E0218 19:17:20.937398 4942 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError"
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.937732 4942 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused
Feb 18 19:17:20 crc kubenswrapper[4942]: E0218 19:17:20.937862 4942 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.939930 4942 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.941102 4942 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.944108 4942 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.945988 4942 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.946051 4942 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.946074 4942 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.946092 4942 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.946127 4942 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.946146 4942 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.946164 4942 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.946197 4942 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.946218 4942 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.946237 4942 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.946295 4942 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.946314 4942 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.947486 4942 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.948340 4942 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.948628 4942 server.go:1280] "Started kubelet"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.950158 4942 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.950145 4942 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.951940 4942 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 18 19:17:20 crc systemd[1]: Started Kubernetes Kubelet.
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.952074 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.952115 4942 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.952319 4942 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.953152 4942 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.952424 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:13:41.229528457 +0000 UTC
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.952603 4942 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 18 19:17:20 crc kubenswrapper[4942]: E0218 19:17:20.952380 4942 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 19:17:20 crc kubenswrapper[4942]: W0218 19:17:20.954554 4942 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused
Feb 18 19:17:20 crc kubenswrapper[4942]: E0218 19:17:20.956335 4942 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError"
Feb 18 19:17:20 crc kubenswrapper[4942]: E0218 19:17:20.954474 4942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="200ms"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.960212 4942 server.go:460] "Adding debug handlers to kubelet server"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.964120 4942 factory.go:55] Registering systemd factory
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.964239 4942 factory.go:221] Registration of the systemd container factory successfully
Feb 18 19:17:20 crc kubenswrapper[4942]: E0218 19:17:20.966070 4942 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.188:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18956d5527dc0823 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 19:17:20.948537379 +0000 UTC m=+0.653470114,LastTimestamp:2026-02-18 19:17:20.948537379 +0000 UTC m=+0.653470114,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971482 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971595 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971630 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971659 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971690 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971716 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971743 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971812 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971844 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971872 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971900 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971926 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971952 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.971979 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972003 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972050 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972074 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972099 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972157 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972182 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972206 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972234 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972260 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a"
volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972288 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972312 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972336 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972385 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972413 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972440 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" 
seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972468 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972496 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972522 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972550 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972580 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972610 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 18 19:17:20 crc 
kubenswrapper[4942]: I0218 19:17:20.972636 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972662 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972688 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972720 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972748 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972823 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972853 4942 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972881 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972908 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972933 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.972959 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973017 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973044 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973073 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973097 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973123 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973147 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973185 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973216 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973276 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973304 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973338 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973367 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973396 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973425 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973454 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973479 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973508 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973537 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973565 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973591 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 
18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973618 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973646 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973697 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973723 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973754 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973825 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973853 4942 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973879 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973905 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973965 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973998 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.973303 4942 factory.go:153] Registering CRI-O factory Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974113 4942 factory.go:221] Registration of the crio container factory successfully Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974035 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974283 4942 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974274 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974339 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974363 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974390 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974413 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" 
seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974433 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974456 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974474 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974491 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974511 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974529 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 
19:17:20.974548 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974567 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974588 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974606 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974623 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974647 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974664 4942 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974680 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974698 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974716 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974734 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974752 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974884 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974905 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974925 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974964 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974362 4942 factory.go:103] Registering Raw factory Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.974988 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975162 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975043 4942 
manager.go:1196] Started watching for new ooms in manager Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975218 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975383 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975413 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975443 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975469 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975494 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 18 19:17:20 
crc kubenswrapper[4942]: I0218 19:17:20.975521 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975546 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975572 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975593 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975614 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975638 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975663 4942 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975683 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975742 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975791 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975815 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975840 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975862 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975887 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975908 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975933 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975953 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975973 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.975993 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" 
seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976017 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976041 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976061 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976085 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976107 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976129 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976154 4942 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976177 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976201 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976223 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976245 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976283 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976303 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976321 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976341 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976362 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976383 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976403 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976423 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976485 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976506 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976526 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976547 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976566 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976585 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" 
seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976604 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976624 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976643 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976662 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976679 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.976702 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 18 19:17:20 crc 
kubenswrapper[4942]: I0218 19:17:20.978315 4942 manager.go:319] Starting recovery of all containers Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979000 4942 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979168 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979212 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979244 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979278 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979307 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979330 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979355 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979378 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979424 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979448 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979475 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979513 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979564 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979595 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979627 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979657 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979684 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979712 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979740 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979814 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979845 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979876 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979903 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979929 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979959 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.979988 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980019 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980050 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980077 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980107 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980128 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980148 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980167 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980186 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980206 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" 
seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980228 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980246 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980263 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980284 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980303 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980322 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 
19:17:20.980341 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980362 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980383 4942 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980400 4942 reconstruct.go:97] "Volume reconstruction finished"
Feb 18 19:17:20 crc kubenswrapper[4942]: I0218 19:17:20.980472 4942 reconciler.go:26] "Reconciler: start to sync state"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.005214 4942 manager.go:324] Recovery completed
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.021296 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.023308 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.023363 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.023374 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.024380 4942 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.024405 4942 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.024440 4942 state_mem.go:36] "Initialized new in-memory state store"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.031422 4942 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.034078 4942 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.034512 4942 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.034554 4942 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 18 19:17:21 crc kubenswrapper[4942]: E0218 19:17:21.034691 4942 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 18 19:17:21 crc kubenswrapper[4942]: W0218 19:17:21.036670 4942 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused
Feb 18 19:17:21 crc kubenswrapper[4942]: E0218 19:17:21.036754 4942 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.049056 4942 policy_none.go:49] "None policy: Start"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.050269 4942 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.050329 4942 state_mem.go:35] "Initializing new in-memory state store"
Feb 18 19:17:21 crc kubenswrapper[4942]: E0218 19:17:21.053839 4942 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.131315 4942 manager.go:334] "Starting Device Plugin manager"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.131738 4942 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.131752 4942 server.go:79] "Starting device plugin registration server"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.132301 4942 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.132318 4942 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.132853 4942 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.133095 4942 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.133111 4942 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.135500 4942 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"]
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.135678 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.137246 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.137293 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.137310 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.137562 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.137748 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.137815 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.139238 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.139295 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.139317 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.139504 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.139538 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.139555 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.139710 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.139844 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.139877 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.141077 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.141145 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.141173 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.141592 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.141629 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.141647 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.141796 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.141922 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.141959 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:21 crc kubenswrapper[4942]: E0218 19:17:21.143901 4942 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.144241 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.144267 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.144278 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.145650 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.145682 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.145691 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.145881 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.146210 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.146253 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.146921 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.146950 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.146968 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.147208 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.147272 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.147291 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.147353 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.147387 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.148335 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.148374 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.148387 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:21 crc kubenswrapper[4942]: E0218 19:17:21.157241 4942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="400ms"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183505 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183599 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183687 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183710 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183730 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183749 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183806 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183826 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183845 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183860 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183875 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183891 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183906 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183931 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.183965 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.233677 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.235052 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.235087 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.235098 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.235122 4942 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: E0218 19:17:21.235812 4942 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.188:6443: connect: connection refused" node="crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.284862 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285163 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285256 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285329 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285409 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285487 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285353 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285612 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285500 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285098 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285426 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285566 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285556 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285814 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285850 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285879 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285908 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285972 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285990 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.286015 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.286051 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.286062 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.286098 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.286107 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.286156 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.285925 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.286218 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.286264 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.286410 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.286410 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.289927 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: W0218 19:17:21.333900 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-beee5200fae558a39fda42c2ce66cd18696637ea9ca22dd80f1dfe753e4826ff WatchSource:0}: Error finding container beee5200fae558a39fda42c2ce66cd18696637ea9ca22dd80f1dfe753e4826ff: Status 404 returned error can't find the container with id beee5200fae558a39fda42c2ce66cd18696637ea9ca22dd80f1dfe753e4826ff
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.437086 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.438693 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.438798 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.438818 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.438865 4942 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: E0218 19:17:21.439693 4942 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.188:6443: connect: connection refused" node="crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.496461 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: W0218 19:17:21.519128 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-bd22406eca633f3eda21f4ba8b3c78ef17d4029502dd2e6d2f3becaee32dce29 WatchSource:0}: Error finding container bd22406eca633f3eda21f4ba8b3c78ef17d4029502dd2e6d2f3becaee32dce29: Status 404 returned error can't find the container with id bd22406eca633f3eda21f4ba8b3c78ef17d4029502dd2e6d2f3becaee32dce29
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.524724 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.551844 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: E0218 19:17:21.558838 4942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="800ms"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.582255 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: W0218 19:17:21.600033 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-86abd5e6c59a8801c977070aaf2c8b8d3b2fe729948843999bc5b18915157a5a WatchSource:0}: Error finding container 86abd5e6c59a8801c977070aaf2c8b8d3b2fe729948843999bc5b18915157a5a: Status 404 returned error can't find the container with id 86abd5e6c59a8801c977070aaf2c8b8d3b2fe729948843999bc5b18915157a5a
Feb 18 19:17:21 crc kubenswrapper[4942]: W0218 19:17:21.800813 4942 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused
Feb 18 19:17:21 crc kubenswrapper[4942]: E0218 19:17:21.800941 4942 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError"
Feb 18 19:17:21 crc kubenswrapper[4942]: W0218 19:17:21.838553 4942 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused
Feb 18 19:17:21 crc kubenswrapper[4942]: E0218 19:17:21.838686 4942 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.839889 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.841582 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.841635 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.841653 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.841691 4942 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: E0218 19:17:21.842325 4942 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.188:6443: connect: connection refused" node="crc"
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.949850 4942 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused
Feb 18 19:17:21 crc kubenswrapper[4942]: I0218 19:17:21.954047 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 00:09:19.222309154 +0000 UTC
Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.040220 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"beee5200fae558a39fda42c2ce66cd18696637ea9ca22dd80f1dfe753e4826ff"}
Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.041898 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"86abd5e6c59a8801c977070aaf2c8b8d3b2fe729948843999bc5b18915157a5a"}
Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.044124 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3"}
Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.044205 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"24c5ff3d077169128b674657bf2669ef0b4d72ad21d4062d7fa7f76aa83eaa2a"}
Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.044453 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.046906 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.046966 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.046986 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.049715 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4"} Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.049827 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f6dfa8dbd907625a0d654282e664bb179d300b2c0437dda1c58b1ecb350c77ba"} Feb 18 19:17:22 crc kubenswrapper[4942]: W0218 19:17:22.050734 4942 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Feb 18 19:17:22 crc kubenswrapper[4942]: E0218 19:17:22.050892 4942 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.051749 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bd22406eca633f3eda21f4ba8b3c78ef17d4029502dd2e6d2f3becaee32dce29"} Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.051917 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.052937 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.052995 4942 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.053019 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:22 crc kubenswrapper[4942]: W0218 19:17:22.139919 4942 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Feb 18 19:17:22 crc kubenswrapper[4942]: E0218 19:17:22.140007 4942 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:17:22 crc kubenswrapper[4942]: E0218 19:17:22.359630 4942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="1.6s" Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.642875 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.644321 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.644377 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.644389 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:22 crc 
kubenswrapper[4942]: I0218 19:17:22.644425 4942 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:17:22 crc kubenswrapper[4942]: E0218 19:17:22.645147 4942 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.188:6443: connect: connection refused" node="crc" Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.856290 4942 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 19:17:22 crc kubenswrapper[4942]: E0218 19:17:22.858144 4942 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.949976 4942 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Feb 18 19:17:22 crc kubenswrapper[4942]: I0218 19:17:22.954201 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 23:44:07.621929573 +0000 UTC Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.056432 4942 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09" exitCode=0 Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.056906 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09"} Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.057130 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.058013 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.058051 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.058062 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.058196 4942 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="56411e12886c9a228da08cd2a84af4beda72fb5b0a8a51a10d38558853b1d748" exitCode=0 Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.058253 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"56411e12886c9a228da08cd2a84af4beda72fb5b0a8a51a10d38558853b1d748"} Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.058435 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.060124 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.060231 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a"} Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.060387 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.060128 4942 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a" exitCode=0 Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.061280 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.061322 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.061341 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.061342 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.061433 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.061441 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.062158 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.062201 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 
19:17:23.062218 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.063209 4942 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3" exitCode=0 Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.063276 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3"} Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.063689 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.064700 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.064739 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.064756 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.066869 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746"} Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.066960 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.066974 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4"} Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.067112 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe"} Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.071660 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.071747 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.071819 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:23 crc kubenswrapper[4942]: W0218 19:17:23.817453 4942 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Feb 18 19:17:23 crc kubenswrapper[4942]: E0218 19:17:23.817644 4942 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.949611 4942 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Feb 18 19:17:23 crc kubenswrapper[4942]: I0218 19:17:23.954755 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 01:51:25.269150799 +0000 UTC Feb 18 19:17:23 crc kubenswrapper[4942]: E0218 19:17:23.960377 4942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="3.2s" Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.073617 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c16c164479a6aa22042dd8b972db6fc6b802a7a1fc1a50b1538e85b6afe9b913"} Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.073782 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.074619 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.074645 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.074657 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.078773 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7a3ed5634c2ead9b37bd3c51e5ba9f710e1a2b4430552bfce39b234bc7efdac5"} Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.078812 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"691cb927291454a41fe8552c32737d52f8430e180870cd9c2bdc827926f15cd0"} Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.078825 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bf61d811b92484ed6f2e49184a29d51957000ce926d74afe7b452b8845673afb"} Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.078945 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.079997 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.080024 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.080036 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.089524 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88"} Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.089575 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d"} Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.089592 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954"} Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.089605 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383"} Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.091400 4942 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d9c537e3da2b2161286e254413b53d277aa3f40704439fabadbc37848f2b2fc7" exitCode=0 Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.091546 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.091561 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.091543 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d9c537e3da2b2161286e254413b53d277aa3f40704439fabadbc37848f2b2fc7"} Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.092711 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.092735 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.092744 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.092771 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.092777 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.092784 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.245330 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.246864 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.246909 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.246922 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.246947 4942 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:17:24 crc kubenswrapper[4942]: E0218 19:17:24.247486 4942 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.188:6443: connect: connection refused" node="crc" Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.713828 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.720134 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:17:24 crc kubenswrapper[4942]: I0218 19:17:24.955017 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 05:18:53.171483231 +0000 UTC Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.096629 4942 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="05c476815b1a0d0fcac36ccf894fb3b31e2829b84816a3da48e1f6bbcb476065" exitCode=0 Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.096781 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"05c476815b1a0d0fcac36ccf894fb3b31e2829b84816a3da48e1f6bbcb476065"} Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.096823 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.097918 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.098023 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.098088 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.100018 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.100041 4942 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5246513a84d5da4c946e19dabd015225e05065daacd217fe981038f9c572b73f"} Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.100118 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.100133 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.101211 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.101233 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.101243 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.101350 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.101384 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.101398 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.102163 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.102197 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 
19:17:25.102209 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.850235 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:25 crc kubenswrapper[4942]: I0218 19:17:25.955390 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 00:13:29.344341464 +0000 UTC
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.107034 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.108010 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4c8a67008b781ea71caada1442830007b0bd3da48a88497babddf482144bfec0"}
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.108103 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"76699f1fec7e64b06e2cc8478d06b157701ccfb88e09c32be80176f7ff7036b6"}
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.108127 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a85175aa81681c668f7d94ca0deceeac84a65b61bcca1eea90227320748655e7"}
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.108143 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9bfe09f5c6e255c5b82f078a14b9a0e6d1e9160a992d135aa89d3b64899315ea"}
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.108255 4942 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.108293 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.109293 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.109336 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.109352 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.109951 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.109982 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.109999 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.670015 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.776962 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:26 crc kubenswrapper[4942]: I0218 19:17:26.956322 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 22:42:48.466110864 +0000 UTC
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.015700 4942 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.115200 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4104133a81e916b706c0e0f75486e2e71f4f98f4329b84ec1320e500f810fbfc"}
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.115272 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.115342 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.115553 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.116368 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.116395 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.116403 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.116559 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.116611 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.116642 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.117330 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.117361 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.117371 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.448092 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.449920 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.449998 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.450019 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.450064 4942 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.659367 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.659662 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.661557 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.661616 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.661642 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:27 crc kubenswrapper[4942]: I0218 19:17:27.957298 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:25:43.190270707 +0000 UTC
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.118603 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.119239 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.121195 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.121251 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.121266 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.121953 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.122030 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.122064 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.848332 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.848675 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.850525 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.850613 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.850635 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:28 crc kubenswrapper[4942]: I0218 19:17:28.958146 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 09:25:52.618003459 +0000 UTC
Feb 18 19:17:29 crc kubenswrapper[4942]: I0218 19:17:29.179669 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:29 crc kubenswrapper[4942]: I0218 19:17:29.180050 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:29 crc kubenswrapper[4942]: I0218 19:17:29.182002 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:29 crc kubenswrapper[4942]: I0218 19:17:29.182067 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:29 crc kubenswrapper[4942]: I0218 19:17:29.182085 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:29 crc kubenswrapper[4942]: I0218 19:17:29.959060 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 23:07:30.682345415 +0000 UTC
Feb 18 19:17:30 crc kubenswrapper[4942]: I0218 19:17:30.959591 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 14:04:43.431687801 +0000 UTC
Feb 18 19:17:31 crc kubenswrapper[4942]: I0218 19:17:31.014923 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 18 19:17:31 crc kubenswrapper[4942]: I0218 19:17:31.015227 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:31 crc kubenswrapper[4942]: I0218 19:17:31.018263 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:31 crc kubenswrapper[4942]: I0218 19:17:31.018315 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:31 crc kubenswrapper[4942]: I0218 19:17:31.018338 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:31 crc kubenswrapper[4942]: E0218 19:17:31.144040 4942 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 18 19:17:31 crc kubenswrapper[4942]: I0218 19:17:31.154083 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:17:31 crc kubenswrapper[4942]: I0218 19:17:31.154297 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:31 crc kubenswrapper[4942]: I0218 19:17:31.155834 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:31 crc kubenswrapper[4942]: I0218 19:17:31.155905 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:31 crc kubenswrapper[4942]: I0218 19:17:31.155929 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:31 crc kubenswrapper[4942]: I0218 19:17:31.960460 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 04:27:02.853296037 +0000 UTC
Feb 18 19:17:32 crc kubenswrapper[4942]: I0218 19:17:32.960682 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 03:30:32.187339887 +0000 UTC
Feb 18 19:17:33 crc kubenswrapper[4942]: I0218 19:17:33.961388 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 21:09:33.8663889 +0000 UTC
Feb 18 19:17:34 crc kubenswrapper[4942]: I0218 19:17:34.154266 4942 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 19:17:34 crc kubenswrapper[4942]: I0218 19:17:34.154394 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 19:17:34 crc kubenswrapper[4942]: W0218 19:17:34.789479 4942 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Feb 18 19:17:34 crc kubenswrapper[4942]: I0218 19:17:34.789637 4942 trace.go:236] Trace[1125017472]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 19:17:24.788) (total time: 10001ms):
Feb 18 19:17:34 crc kubenswrapper[4942]: Trace[1125017472]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:17:34.789)
Feb 18 19:17:34 crc kubenswrapper[4942]: Trace[1125017472]: [10.001521808s] [10.001521808s] END
Feb 18 19:17:34 crc kubenswrapper[4942]: E0218 19:17:34.789680 4942 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Feb 18 19:17:34 crc kubenswrapper[4942]: W0218 19:17:34.864977 4942 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Feb 18 19:17:34 crc kubenswrapper[4942]: I0218 19:17:34.865094 4942 trace.go:236] Trace[260678053]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 19:17:24.863) (total time: 10001ms):
Feb 18 19:17:34 crc kubenswrapper[4942]: Trace[260678053]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:17:34.864)
Feb 18 19:17:34 crc kubenswrapper[4942]: Trace[260678053]: [10.001283813s] [10.001283813s] END
Feb 18 19:17:34 crc kubenswrapper[4942]: E0218 19:17:34.865127 4942 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Feb 18 19:17:34 crc kubenswrapper[4942]: I0218 19:17:34.950950 4942 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Feb 18 19:17:34 crc kubenswrapper[4942]: I0218 19:17:34.962520 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:37:38.747421841 +0000 UTC
Feb 18 19:17:35 crc kubenswrapper[4942]: W0218 19:17:35.213482 4942 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Feb 18 19:17:35 crc kubenswrapper[4942]: I0218 19:17:35.213632 4942 trace.go:236] Trace[372135767]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 19:17:25.211) (total time: 10001ms):
Feb 18 19:17:35 crc kubenswrapper[4942]: Trace[372135767]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:17:35.213)
Feb 18 19:17:35 crc kubenswrapper[4942]: Trace[372135767]: [10.001607131s] [10.001607131s] END
Feb 18 19:17:35 crc kubenswrapper[4942]: E0218 19:17:35.213667 4942 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Feb 18 19:17:35 crc kubenswrapper[4942]: I0218 19:17:35.407325 4942 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 18 19:17:35 crc kubenswrapper[4942]: I0218 19:17:35.407417 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 18 19:17:35 crc kubenswrapper[4942]: I0218 19:17:35.430880 4942 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 18 19:17:35 crc kubenswrapper[4942]: I0218 19:17:35.430959 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 18 19:17:35 crc kubenswrapper[4942]: I0218 19:17:35.962669 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 00:56:00.678431319 +0000 UTC
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.050627 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.050910 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.052051 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.052090 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.052103 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.089263 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.145071 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.146233 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.146326 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.146352 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.181958 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.674102 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.674250 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.675587 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.675649 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.675665 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.783462 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.783728 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.785427 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.785475 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.785492 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.788783 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 19:17:36 crc kubenswrapper[4942]: I0218 19:17:36.963507 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 06:33:39.616257798 +0000 UTC
Feb 18 19:17:37 crc kubenswrapper[4942]: I0218 19:17:37.148262 4942 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 18 19:17:37 crc kubenswrapper[4942]: I0218 19:17:37.148339 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:37 crc kubenswrapper[4942]: I0218 19:17:37.148339 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:37 crc kubenswrapper[4942]: I0218 19:17:37.149992 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:37 crc kubenswrapper[4942]: I0218 19:17:37.150067 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:37 crc kubenswrapper[4942]: I0218 19:17:37.150091 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:37 crc kubenswrapper[4942]: I0218 19:17:37.150435 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:37 crc kubenswrapper[4942]: I0218 19:17:37.150499 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:37 crc kubenswrapper[4942]: I0218 19:17:37.150518 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:37 crc kubenswrapper[4942]: I0218 19:17:37.964510 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 12:26:05.66862836 +0000 UTC
Feb 18 19:17:38 crc kubenswrapper[4942]: I0218 19:17:38.965045 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 18:23:20.409513321 +0000 UTC
Feb 18 19:17:39 crc kubenswrapper[4942]: I0218 19:17:39.965730 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 02:18:14.771144861 +0000 UTC
Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.161158 4942 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 18 19:17:40 crc kubenswrapper[4942]: E0218 19:17:40.399245 4942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.402130 4942 trace.go:236] Trace[188569331]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 19:17:29.443) (total time: 10958ms):
Feb 18 19:17:40 crc kubenswrapper[4942]: Trace[188569331]: ---"Objects listed" error: 10958ms (19:17:40.401)
Feb 18 19:17:40 crc kubenswrapper[4942]: Trace[188569331]: [10.958232127s] [10.958232127s] END
Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.402172 4942 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.404462 4942 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 18 19:17:40 crc kubenswrapper[4942]: E0218 19:17:40.404560 4942 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.422742 4942 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.440032 4942 csr.go:261] certificate signing request csr-ccn9p is approved, waiting to be issued
Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.455326 4942 csr.go:257] certificate signing request csr-ccn9p is issued
Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.472800 4942 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body=
Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.472874 4942 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body=
Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.472884 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF"
Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.472959 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF"
Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.477955 4942 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33612->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.478037 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33612->192.168.126.11:17697: read: connection reset by peer"
Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.755876 4942 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.796112 4942 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 18 19:17:40 crc kubenswrapper[4942]: W0218 19:17:40.796808 4942 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 18 19:17:40 crc kubenswrapper[4942]: E0218 19:17:40.796676 4942 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": read tcp 38.102.83.188:54284->38.102.83.188:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-apiserver-crc.18956d55691e14e1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 19:17:22.043385057 +0000 UTC m=+1.748317732,LastTimestamp:2026-02-18 19:17:22.043385057 +0000 UTC m=+1.748317732,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 18 19:17:40 crc kubenswrapper[4942]: W0218 19:17:40.796823 4942 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 18 19:17:40 crc kubenswrapper[4942]: W0218 19:17:40.796823 4942 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 18 19:17:40 crc kubenswrapper[4942]: I0218 19:17:40.966181 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 19:38:30.097128882 +0000 UTC
Feb 18 19:17:41 crc kubenswrapper[4942]: E0218 19:17:41.145123 4942 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.160635 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.162524 4942 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5246513a84d5da4c946e19dabd015225e05065daacd217fe981038f9c572b73f" exitCode=255
Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.162565 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5246513a84d5da4c946e19dabd015225e05065daacd217fe981038f9c572b73f"}
Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.162726 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.163651 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.163680 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.163688 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.164477 4942 scope.go:117] "RemoveContainer" containerID="5246513a84d5da4c946e19dabd015225e05065daacd217fe981038f9c572b73f"
Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.254402 4942 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.261860 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.267263 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.457917 4942 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-18 19:12:40 +0000 UTC, rotation deadline is 2026-12-26 19:06:59.826001153 +0000 UTC
Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.457970 4942 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7463h49m18.368034067s for next certificate rotation
Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.945977 4942 apiserver.go:52] "Watching apiserver"
Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.952627 4942 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.953064 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-5pgvt","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"]
Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.953529 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.953558 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.953617 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.953695 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:17:41 crc kubenswrapper[4942]: E0218 19:17:41.953903 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:17:41 crc kubenswrapper[4942]: E0218 19:17:41.953993 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.954246 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.954289 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-5pgvt"
Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.954303 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:17:41 crc kubenswrapper[4942]: E0218 19:17:41.954359 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.956830 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.956946 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.957050 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.957085 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.957242 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.957554 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.957723 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.958293 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.958360 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.958591 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 18 19:17:41 crc 
kubenswrapper[4942]: I0218 19:17:41.958607 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.960730 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.966307 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 09:42:00.198047894 +0000 UTC Feb 18 19:17:41 crc kubenswrapper[4942]: I0218 19:17:41.989028 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.001369 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.016231 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.016364 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.016704 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.016851 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.016945 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.017060 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.017147 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.017222 4942 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.017235 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.017304 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.017332 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.017363 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 
19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.017387 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.017403 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.018037 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.018245 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.018787 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.026021 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.027832 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.032007 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.032051 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.032066 4942 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.032181 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:42.532133907 +0000 UTC m=+22.237066782 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.034955 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.035726 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.035749 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.035795 4942 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.035924 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-18 19:17:42.5359066 +0000 UTC m=+22.240839265 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.037294 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.037569 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.040011 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.043242 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: 
\"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.048642 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.054793 4942 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.059261 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.069661 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.077878 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118108 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118163 4942 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118189 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118212 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118239 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118261 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118285 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118308 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118333 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118356 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118378 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118401 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118423 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118448 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118469 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118533 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118556 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118579 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118600 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118624 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118646 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118670 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118694 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118716 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118738 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118782 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118809 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118831 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118853 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118877 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118920 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118939 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118964 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.118984 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119005 
4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119028 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119048 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119068 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119090 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119110 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod 
\"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119131 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119150 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119181 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119201 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119221 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119244 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119265 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119285 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119306 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119345 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119365 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119385 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119405 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119441 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119460 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119481 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119503 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119490 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119526 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119550 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119572 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119592 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119614 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119637 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119658 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119680 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119702 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119722 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119729 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119747 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119793 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119815 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119836 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119858 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119883 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119904 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119927 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119947 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 19:17:42 crc 
kubenswrapper[4942]: I0218 19:17:42.119948 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119968 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.119992 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120015 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120037 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120173 4942 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120198 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120220 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120241 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120263 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120273 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120321 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120342 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120363 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120384 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120404 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120424 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120445 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120466 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120487 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120507 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120528 4942 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120549 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120570 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120590 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120613 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120634 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: 
\"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120656 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120676 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120697 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120720 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120740 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120785 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120809 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120831 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120852 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120875 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120897 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod 
\"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120963 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.120985 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121006 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121030 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121052 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121074 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121095 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121117 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121139 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121160 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121181 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") 
pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121202 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121224 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121246 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121268 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121293 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121317 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121339 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121360 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121383 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121407 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121433 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod 
\"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121455 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121476 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121498 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121520 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121546 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121567 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121588 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121611 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121632 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121659 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121681 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: 
\"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121703 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121726 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121748 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121785 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121810 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121833 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121855 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121879 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121901 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121922 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121945 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " 
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121969 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.121990 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122014 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122036 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122059 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122081 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122103 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122125 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122149 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122173 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122195 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 18 19:17:42 crc kubenswrapper[4942]: 
I0218 19:17:42.122218 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122242 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122265 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122288 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122312 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122336 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122358 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122381 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122385 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122404 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122430 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122454 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122478 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122501 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122525 4942 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122548 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122554 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122579 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122602 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122626 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122649 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122672 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122697 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122706 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122723 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122747 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122787 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122810 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122837 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122859 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122883 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122954 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.122983 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.123028 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.123144 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f163820b-df8b-4e07-9b74-d5f3332580a6-hosts-file\") pod \"node-resolver-5pgvt\" (UID: \"f163820b-df8b-4e07-9b74-d5f3332580a6\") " pod="openshift-dns/node-resolver-5pgvt" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.123167 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjg6z\" (UniqueName: \"kubernetes.io/projected/f163820b-df8b-4e07-9b74-d5f3332580a6-kube-api-access-pjg6z\") pod \"node-resolver-5pgvt\" (UID: \"f163820b-df8b-4e07-9b74-d5f3332580a6\") " pod="openshift-dns/node-resolver-5pgvt" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.123220 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.123284 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc 
kubenswrapper[4942]: I0218 19:17:42.123301 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.123335 4942 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.123350 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.123365 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.123379 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.123436 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.123490 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" 
(UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.123713 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.123779 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.124449 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.124575 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.124658 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.124685 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.124873 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.125042 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.125101 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.125119 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.125284 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.125508 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.128566 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.128421 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.125600 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.125602 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.128876 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.129011 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.130134 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.130203 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.130393 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.130435 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.131224 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.131401 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.131397 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.132037 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.132305 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.132493 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.132879 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.125623 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.134164 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.134713 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.134751 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.135229 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.135294 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.136584 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.136599 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.137141 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.138426 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.139099 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.139156 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.139424 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.139439 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.139497 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.139746 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.140008 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.140045 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.133687 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.140278 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.140303 4942 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.140445 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:42.640399041 +0000 UTC m=+22.345331696 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.140492 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.133811 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.141454 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.141627 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.141826 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.141962 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.142039 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.142294 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.142428 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.154549 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.154885 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.155552 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.155421 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.155871 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.156087 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:17:42.656054108 +0000 UTC m=+22.360986773 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.156132 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.156545 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.156968 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.157119 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.157360 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.157407 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.157632 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.157505 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.157698 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.158026 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.158113 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.158450 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.158683 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.158721 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.160539 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.160865 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.161155 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.161198 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.161210 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.161306 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.161630 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.161797 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.161951 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.161992 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.162398 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.162541 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.162803 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.163131 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.163346 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.163435 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.163633 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.164017 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.164412 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.164951 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.165025 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.165502 4942 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.165607 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.165644 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:42.665613684 +0000 UTC m=+22.370546369 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.165694 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.166052 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.166356 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.166605 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.166694 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.166865 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.167123 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.167140 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.167203 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.167237 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.135283 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.167504 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.167511 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.168745 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.168752 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.168865 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.169040 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.169250 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.169290 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.168977 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.136359 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.169677 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.169891 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.169942 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.170133 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.170507 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.170677 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.170716 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.170982 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.171086 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.171462 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.172213 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.171754 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.172266 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.173517 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.173832 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.174064 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.176218 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.176623 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.176820 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.176900 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.177299 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.178422 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-8jfwb"] Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.178826 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-2rbc4"] Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.179381 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-wqxh4"] Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.179750 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.180623 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.180703 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.184545 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.184610 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.184902 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.190337 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.190420 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.190527 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.190712 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.190878 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.191033 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.191109 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.191207 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.191229 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.191342 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.191458 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.191465 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.191997 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.192604 4942 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.192738 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.192796 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.193267 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.193320 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.195903 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.199061 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.199534 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.199671 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.199741 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.199833 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.200066 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.200199 4942 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8" exitCode=255 Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.200430 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8"} Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.200511 4942 scope.go:117] "RemoveContainer" containerID="5246513a84d5da4c946e19dabd015225e05065daacd217fe981038f9c572b73f" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.200665 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.200773 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.200983 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.201371 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.201633 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.201726 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.202247 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.202784 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.203138 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.203398 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.203571 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.203668 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.207731 4942 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.208415 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.209521 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.209661 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.210405 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.211453 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.211739 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.212450 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.213556 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.213699 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.216081 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.218134 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.219123 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.219433 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.221088 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.221325 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.222639 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.222711 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.223864 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.224275 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f163820b-df8b-4e07-9b74-d5f3332580a6-hosts-file\") pod \"node-resolver-5pgvt\" (UID: \"f163820b-df8b-4e07-9b74-d5f3332580a6\") " pod="openshift-dns/node-resolver-5pgvt" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.224307 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjg6z\" (UniqueName: \"kubernetes.io/projected/f163820b-df8b-4e07-9b74-d5f3332580a6-kube-api-access-pjg6z\") pod \"node-resolver-5pgvt\" (UID: \"f163820b-df8b-4e07-9b74-d5f3332580a6\") " pod="openshift-dns/node-resolver-5pgvt" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.224352 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.224367 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.224378 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.224549 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.224839 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f163820b-df8b-4e07-9b74-d5f3332580a6-hosts-file\") pod \"node-resolver-5pgvt\" (UID: \"f163820b-df8b-4e07-9b74-d5f3332580a6\") " pod="openshift-dns/node-resolver-5pgvt" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225076 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225107 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225121 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225131 4942 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225146 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225157 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 
19:17:42.225170 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225183 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225193 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225203 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225214 4942 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225224 4942 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225233 4942 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225243 4942 reconciler_common.go:293] "Volume detached 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225255 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225265 4942 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225276 4942 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225290 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225302 4942 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225313 4942 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225323 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 18 
19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225333 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225345 4942 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225355 4942 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225367 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225380 4942 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225394 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225405 4942 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225415 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225429 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225440 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225450 4942 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225461 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225472 4942 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225482 4942 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225494 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc 
kubenswrapper[4942]: I0218 19:17:42.225503 4942 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225514 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225525 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225542 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225554 4942 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225565 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225579 4942 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 
19:17:42.225592 4942 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225603 4942 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225618 4942 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225629 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225640 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225652 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225719 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 
19:17:42.225731 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225742 4942 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225753 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225780 4942 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225791 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225802 4942 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225812 4942 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225823 4942 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225833 4942 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225844 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225856 4942 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225867 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225877 4942 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225887 4942 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225897 4942 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225909 4942 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225920 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225933 4942 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225944 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225957 4942 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225968 4942 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225978 4942 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on 
node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225988 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.225998 4942 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226009 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226019 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226030 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226040 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226050 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226061 4942 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226071 4942 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226082 4942 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226092 4942 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226103 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226113 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226124 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226134 4942 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226144 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226154 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226163 4942 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226173 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226183 4942 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226193 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226203 4942 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226213 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226223 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226233 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226244 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226255 4942 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226265 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226276 4942 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226287 4942 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226298 4942 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226309 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226320 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226335 4942 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226347 4942 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226358 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226372 4942 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226351 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226385 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226464 4942 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226486 4942 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226500 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226538 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226551 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226562 4942 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226573 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226584 4942 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226595 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226606 4942 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226617 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226628 4942 
reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226642 4942 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226653 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226665 4942 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226676 4942 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226689 4942 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226702 4942 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226714 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: 
\"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226728 4942 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226742 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226809 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226824 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226837 4942 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226850 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226861 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226875 4942 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226887 4942 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226900 4942 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226913 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226925 4942 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226939 4942 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226952 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node 
\"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226966 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226980 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.226997 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227013 4942 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227028 4942 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227041 4942 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227055 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227068 
4942 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227079 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227092 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227106 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227655 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227672 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227726 4942 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227739 4942 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227752 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227780 4942 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227793 4942 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227805 4942 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227818 4942 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227830 4942 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227842 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227855 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227872 4942 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227883 4942 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227895 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227907 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227919 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227930 4942 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 
crc kubenswrapper[4942]: I0218 19:17:42.227943 4942 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227954 4942 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227966 4942 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227978 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.227990 4942 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.228007 4942 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.228021 4942 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.241545 4942 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.241701 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.245686 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjg6z\" (UniqueName: \"kubernetes.io/projected/f163820b-df8b-4e07-9b74-d5f3332580a6-kube-api-access-pjg6z\") pod \"node-resolver-5pgvt\" (UID: \"f163820b-df8b-4e07-9b74-d5f3332580a6\") " pod="openshift-dns/node-resolver-5pgvt" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.249365 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.251513 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.251827 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.262224 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491c
d10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.269641 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.272694 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.273126 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.277336 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.284069 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-5pgvt" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.293923 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.294315 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.306043 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.306474 4942 scope.go:117] "RemoveContainer" containerID="b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.307055 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.308516 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.321718 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.329688 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-system-cni-dir\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.329727 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/75150b8c-7a02-497b-86c3-eabc9c8dbc55-multus-daemon-config\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.329747 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-cnibin\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " 
pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.329781 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27v7m\" (UniqueName: \"kubernetes.io/projected/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-kube-api-access-27v7m\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.329801 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-hostroot\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.329815 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/28921539-823a-4439-a230-3b5aed7085cc-proxy-tls\") pod \"machine-config-daemon-wqxh4\" (UID: \"28921539-823a-4439-a230-3b5aed7085cc\") " pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.329832 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-cni-binary-copy\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.329854 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-tuning-conf-dir\") 
pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.329974 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/28921539-823a-4439-a230-3b5aed7085cc-mcd-auth-proxy-config\") pod \"machine-config-daemon-wqxh4\" (UID: \"28921539-823a-4439-a230-3b5aed7085cc\") " pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330017 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-run-netns\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330037 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-system-cni-dir\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330054 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-var-lib-cni-multus\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330073 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" 
(UniqueName: \"kubernetes.io/configmap/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330092 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-run-multus-certs\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330107 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65c5q\" (UniqueName: \"kubernetes.io/projected/75150b8c-7a02-497b-86c3-eabc9c8dbc55-kube-api-access-65c5q\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330122 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-multus-socket-dir-parent\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330142 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-os-release\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330157 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-run-k8s-cni-cncf-io\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330173 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-multus-cni-dir\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330187 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-cnibin\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330202 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-var-lib-cni-bin\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330233 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-multus-conf-dir\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330251 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2zj5\" (UniqueName: 
\"kubernetes.io/projected/28921539-823a-4439-a230-3b5aed7085cc-kube-api-access-c2zj5\") pod \"machine-config-daemon-wqxh4\" (UID: \"28921539-823a-4439-a230-3b5aed7085cc\") " pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330275 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-var-lib-kubelet\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330289 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-etc-kubernetes\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330304 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/28921539-823a-4439-a230-3b5aed7085cc-rootfs\") pod \"machine-config-daemon-wqxh4\" (UID: \"28921539-823a-4439-a230-3b5aed7085cc\") " pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330321 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/75150b8c-7a02-497b-86c3-eabc9c8dbc55-cni-binary-copy\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330357 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" 
(UniqueName: \"kubernetes.io/host-path/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-os-release\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330381 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330392 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330402 4942 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330412 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.330422 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.337002 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.351181 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.363854 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.378573 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.390150 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.399069 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.408371 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.419807 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5246513a84d5da4c946e19dabd015225e05065daacd217fe981038f9c572b73f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:40Z\\\",\\\"message\\\":\\\"-1433084409/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771442244\\\\\\\\\\\\\\\" (2026-02-18 19:17:23 +0000 UTC to 2026-03-20 19:17:24 +0000 UTC (now=2026-02-18 19:17:40.45438601 +0000 UTC))\\\\\\\"\\\\nI0218 19:17:40.454440 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 19:17:40.454315 1 configmap_cafile_content.go:205] 
\\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 19:17:40.454727 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 19:17:40.454262 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1433084409/tls.crt::/tmp/serving-cert-1433084409/tls.key\\\\\\\"\\\\nI0218 19:17:40.454787 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771442254\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771442254\\\\\\\\\\\\\\\" (2026-02-18 18:17:34 +0000 UTC to 2027-02-18 18:17:34 +0000 UTC (now=2026-02-18 19:17:40.454709698 +0000 UTC))\\\\\\\"\\\\nI0218 19:17:40.454828 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 19:17:40.454834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 19:17:40.454852 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 19:17:40.454856 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 19:17:40.454883 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 19:17:40.455174 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0218 19:17:40.456995 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 
19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd79
1fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.429479 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.430788 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2zj5\" (UniqueName: \"kubernetes.io/projected/28921539-823a-4439-a230-3b5aed7085cc-kube-api-access-c2zj5\") pod \"machine-config-daemon-wqxh4\" (UID: \"28921539-823a-4439-a230-3b5aed7085cc\") " pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.430842 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-multus-conf-dir\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.430862 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-var-lib-kubelet\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.430887 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-etc-kubernetes\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.430907 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/28921539-823a-4439-a230-3b5aed7085cc-rootfs\") pod \"machine-config-daemon-wqxh4\" (UID: \"28921539-823a-4439-a230-3b5aed7085cc\") " pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.430944 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/75150b8c-7a02-497b-86c3-eabc9c8dbc55-cni-binary-copy\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.430979 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-os-release\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: 
\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.431015 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-system-cni-dir\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.431033 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/75150b8c-7a02-497b-86c3-eabc9c8dbc55-multus-daemon-config\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.431048 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-cnibin\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.431101 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-multus-conf-dir\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.431273 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-etc-kubernetes\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 
19:17:42.431333 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-var-lib-kubelet\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.431375 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/28921539-823a-4439-a230-3b5aed7085cc-rootfs\") pod \"machine-config-daemon-wqxh4\" (UID: \"28921539-823a-4439-a230-3b5aed7085cc\") " pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.431549 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-cnibin\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.431622 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-system-cni-dir\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.431683 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-os-release\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.432224 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/75150b8c-7a02-497b-86c3-eabc9c8dbc55-cni-binary-copy\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.432585 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/75150b8c-7a02-497b-86c3-eabc9c8dbc55-multus-daemon-config\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.432843 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27v7m\" (UniqueName: \"kubernetes.io/projected/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-kube-api-access-27v7m\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.433895 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-hostroot\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.433925 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/28921539-823a-4439-a230-3b5aed7085cc-proxy-tls\") pod \"machine-config-daemon-wqxh4\" (UID: \"28921539-823a-4439-a230-3b5aed7085cc\") " pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.433947 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/28921539-823a-4439-a230-3b5aed7085cc-mcd-auth-proxy-config\") pod \"machine-config-daemon-wqxh4\" (UID: \"28921539-823a-4439-a230-3b5aed7085cc\") " pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.434000 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-hostroot\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.435264 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/28921539-823a-4439-a230-3b5aed7085cc-mcd-auth-proxy-config\") pod \"machine-config-daemon-wqxh4\" (UID: \"28921539-823a-4439-a230-3b5aed7085cc\") " pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.436699 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-cni-binary-copy\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.436803 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.436847 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-run-netns\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.436870 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-system-cni-dir\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.436935 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-var-lib-cni-multus\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.436958 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437120 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-run-multus-certs\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437159 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65c5q\" (UniqueName: 
\"kubernetes.io/projected/75150b8c-7a02-497b-86c3-eabc9c8dbc55-kube-api-access-65c5q\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437186 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-multus-socket-dir-parent\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437245 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-run-k8s-cni-cncf-io\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437275 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-os-release\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437301 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-cnibin\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437330 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-var-lib-cni-bin\") pod \"multus-8jfwb\" (UID: 
\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437361 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-multus-cni-dir\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437492 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-cni-binary-copy\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437509 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-multus-cni-dir\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437585 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-multus-socket-dir-parent\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437626 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-run-k8s-cni-cncf-io\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: 
I0218 19:17:42.437679 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-os-release\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437724 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-cnibin\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437794 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-run-netns\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437812 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-var-lib-cni-bin\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437853 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-var-lib-cni-multus\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437895 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-system-cni-dir\") pod 
\"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.437980 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/75150b8c-7a02-497b-86c3-eabc9c8dbc55-host-run-multus-certs\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.438547 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.439774 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.439983 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.448438 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/28921539-823a-4439-a230-3b5aed7085cc-proxy-tls\") pod \"machine-config-daemon-wqxh4\" (UID: \"28921539-823a-4439-a230-3b5aed7085cc\") " pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.451315 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27v7m\" (UniqueName: \"kubernetes.io/projected/1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d-kube-api-access-27v7m\") pod \"multus-additional-cni-plugins-2rbc4\" (UID: \"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\") " 
pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.452873 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2zj5\" (UniqueName: \"kubernetes.io/projected/28921539-823a-4439-a230-3b5aed7085cc-kube-api-access-c2zj5\") pod \"machine-config-daemon-wqxh4\" (UID: \"28921539-823a-4439-a230-3b5aed7085cc\") " pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.455416 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65c5q\" (UniqueName: \"kubernetes.io/projected/75150b8c-7a02-497b-86c3-eabc9c8dbc55-kube-api-access-65c5q\") pod \"multus-8jfwb\" (UID: \"75150b8c-7a02-497b-86c3-eabc9c8dbc55\") " pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.458816 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.518646 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.525558 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.538258 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.538296 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.538434 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.538450 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.538462 4942 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.538519 4942 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:43.538499183 +0000 UTC m=+23.243431848 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.538545 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-8jfwb" Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.538599 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.538640 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.538658 4942 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.538749 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-18 19:17:43.538722649 +0000 UTC m=+23.243655344 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.546617 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-89fzv"] Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.549549 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.551152 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.551638 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.551854 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.551949 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.552633 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.553051 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 18 19:17:42 crc 
kubenswrapper[4942]: I0218 19:17:42.553047 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.561318 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.576073 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ff
b9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.590609 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.600031 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.611868 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.624857 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.637601 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.639697 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-etc-openvswitch\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.639785 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-log-socket\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.639813 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-systemd-units\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.639828 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-ovnkube-config\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.639885 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.639906 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-ovnkube-script-lib\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc 
kubenswrapper[4942]: I0218 19:17:42.639928 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-run-netns\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.639945 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-openvswitch\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.639969 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-cni-bin\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.639988 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-node-log\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.640007 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-var-lib-openvswitch\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 
19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.640023 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-cni-netd\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.640045 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-slash\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.640060 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-run-ovn-kubernetes\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.640078 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl7tj\" (UniqueName: \"kubernetes.io/projected/45dc4164-81a9-44cf-b86a-dff571bc0417-kube-api-access-cl7tj\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.640099 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-kubelet\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.640114 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/45dc4164-81a9-44cf-b86a-dff571bc0417-ovn-node-metrics-cert\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.640133 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-env-overrides\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.640152 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-systemd\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.640175 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-ovn\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.649286 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.660228 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.670854 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.687032 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.697991 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.711597 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5246513a84d5da4c946e19dabd015225e05065daacd217fe981038f9c572b73f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:40Z\\\",\\\"message\\\":\\\"-1433084409/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771442244\\\\\\\\\\\\\\\" (2026-02-18 19:17:23 +0000 UTC to 2026-03-20 19:17:24 +0000 UTC (now=2026-02-18 19:17:40.45438601 +0000 UTC))\\\\\\\"\\\\nI0218 19:17:40.454440 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 19:17:40.454315 1 configmap_cafile_content.go:205] 
\\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 19:17:40.454727 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 19:17:40.454262 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1433084409/tls.crt::/tmp/serving-cert-1433084409/tls.key\\\\\\\"\\\\nI0218 19:17:40.454787 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771442254\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771442254\\\\\\\\\\\\\\\" (2026-02-18 18:17:34 +0000 UTC to 2027-02-18 18:17:34 +0000 UTC (now=2026-02-18 19:17:40.454709698 +0000 UTC))\\\\\\\"\\\\nI0218 19:17:40.454828 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 19:17:40.454834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 19:17:40.454852 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 19:17:40.454856 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 19:17:40.454883 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 19:17:40.455174 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0218 19:17:40.456995 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 
19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd79
1fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741112 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741199 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-node-log\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741222 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-var-lib-openvswitch\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741241 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-cni-netd\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741259 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-slash\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.741290 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:17:43.741257711 +0000 UTC m=+23.446190376 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741307 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-slash\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741335 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-run-ovn-kubernetes\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741358 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-cni-netd\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741373 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl7tj\" (UniqueName: \"kubernetes.io/projected/45dc4164-81a9-44cf-b86a-dff571bc0417-kube-api-access-cl7tj\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 
19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741362 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-var-lib-openvswitch\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741393 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-run-ovn-kubernetes\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741406 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-kubelet\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741430 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-node-log\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741431 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/45dc4164-81a9-44cf-b86a-dff571bc0417-ovn-node-metrics-cert\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741467 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-env-overrides\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741489 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-systemd\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741505 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-ovn\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741543 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741573 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 
19:17:42.741593 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-etc-openvswitch\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741647 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-log-socket\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741668 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-systemd-units\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741683 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-ovnkube-config\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741746 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741780 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-ovnkube-script-lib\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741799 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-run-netns\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.741804 4942 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741817 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-openvswitch\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741841 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-cni-bin\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.741879 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-18 19:17:43.741863046 +0000 UTC m=+23.446795711 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.741908 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-cni-bin\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.741954 4942 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: E0218 19:17:42.741986 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:43.741978699 +0000 UTC m=+23.446911364 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.742007 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-etc-openvswitch\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.742057 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-log-socket\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.742077 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-systemd-units\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.742059 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-kubelet\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.743661 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-ovn\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.743682 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-env-overrides\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.743718 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-systemd\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.743867 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-openvswitch\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.743884 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-run-netns\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.744015 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-var-lib-cni-networks-ovn-kubernetes\") 
pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.744206 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-ovnkube-script-lib\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.745775 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/45dc4164-81a9-44cf-b86a-dff571bc0417-ovn-node-metrics-cert\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.746591 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-ovnkube-config\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.767298 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl7tj\" (UniqueName: \"kubernetes.io/projected/45dc4164-81a9-44cf-b86a-dff571bc0417-kube-api-access-cl7tj\") pod \"ovnkube-node-89fzv\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.870744 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:42 crc kubenswrapper[4942]: I0218 19:17:42.966737 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 19:11:17.901458771 +0000 UTC Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.035372 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.035545 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.040253 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.041241 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.042716 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.043788 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" 
path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.044442 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.045037 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.045634 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.046225 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.046925 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.047448 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.047986 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.048664 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.049206 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.049745 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.050265 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.050815 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.051414 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.051910 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.052490 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.053101 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.054623 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.055558 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.056098 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.056857 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.057354 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.058002 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.058654 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.059219 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.059871 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.060350 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.060863 4942 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.060971 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.062403 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.063024 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.063541 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.068684 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.069489 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.070451 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.071214 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.072296 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.072757 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.073973 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.074637 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.075589 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.076082 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.077001 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.077541 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.078641 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.079204 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.080395 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.080875 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.081429 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.082625 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.083127 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.206671 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.206730 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.206744 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"cc43bdfa8f87b18c190d672f65ec19dd854a057cb070b3b7e69d0c61de7de1b1"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.208049 4942 generic.go:334] "Generic (PLEG): container finished" podID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerID="581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc" exitCode=0 Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.208110 4942 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.208157 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerStarted","Data":"9d4b5c04c361e209886b1bb004385933e7d66c1477df3ba1ff39b92720286780"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.215639 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8jfwb" event={"ID":"75150b8c-7a02-497b-86c3-eabc9c8dbc55","Type":"ContainerStarted","Data":"f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.215693 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8jfwb" event={"ID":"75150b8c-7a02-497b-86c3-eabc9c8dbc55","Type":"ContainerStarted","Data":"795f7eedc1033efe306a5370120d08da83424ccdc74730cd7ad43f9f0455be94"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.219479 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.219525 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"d3f2583de812c35d32f50918d2ea1071672e650d7bb1eca09416558ca25526b1"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.219537 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"423e7dd637f41bd59e9f4610d40651483c31e98ed1a93cc5a3b51823c029a0da"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.221216 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.221255 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9f98ead60d7d7388ed8e2f826325cdf4fb3f733d0c86b21634a5a15f4660b1dc"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.222609 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5pgvt" event={"ID":"f163820b-df8b-4e07-9b74-d5f3332580a6","Type":"ContainerStarted","Data":"97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.222665 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5pgvt" event={"ID":"f163820b-df8b-4e07-9b74-d5f3332580a6","Type":"ContainerStarted","Data":"2b2efaa19b8957c73861f12e23848fb6ad4f5187a5b63fc0525873d9908beb87"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.224267 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.226392 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.226701 4942 scope.go:117] "RemoveContainer" containerID="b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8" Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.226956 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.227731 4942 generic.go:334] "Generic (PLEG): container finished" podID="1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d" 
containerID="26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7" exitCode=0 Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.227802 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" event={"ID":"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d","Type":"ContainerDied","Data":"26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.227822 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" event={"ID":"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d","Type":"ContainerStarted","Data":"ebb430bd7e3fcbe29a36455e3bd0b6b975dcd2edfe5d779405ff6d6129a46903"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.229836 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ddb883da8855a447ab89d150d48183c16c8676db0c8a228fdca5f0546356c698"} Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.240507 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.254499 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.270822 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.282992 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.308884 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.325380 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.341231 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5246513a84d5da4c946e19dabd015225e05065daacd217fe981038f9c572b73f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:40Z\\\",\\\"message\\\":\\\"-1433084409/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1771442244\\\\\\\\\\\\\\\" (2026-02-18 19:17:23 +0000 UTC to 2026-03-20 19:17:24 +0000 UTC (now=2026-02-18 19:17:40.45438601 +0000 UTC))\\\\\\\"\\\\nI0218 19:17:40.454440 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 19:17:40.454315 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 19:17:40.454727 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 19:17:40.454262 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1433084409/tls.crt::/tmp/serving-cert-1433084409/tls.key\\\\\\\"\\\\nI0218 19:17:40.454787 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771442254\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771442254\\\\\\\\\\\\\\\" (2026-02-18 18:17:34 +0000 UTC to 2027-02-18 18:17:34 +0000 UTC (now=2026-02-18 19:17:40.454709698 +0000 UTC))\\\\\\\"\\\\nI0218 19:17:40.454828 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 19:17:40.454834 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0218 19:17:40.454852 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 19:17:40.454856 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 19:17:40.454883 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 19:17:40.455174 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nF0218 19:17:40.456995 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:24Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set 
denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.359372 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.372705 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.389226 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.400360 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.426305 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.439027 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed
21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.452627 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773
257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.466980 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.481904 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.501710 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.515139 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.534568 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.550332 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.550371 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.550506 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.550525 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.550541 4942 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.550584 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:45.55056812 +0000 UTC m=+25.255500775 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.550684 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.550739 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.550808 4942 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.550899 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:45.550877928 +0000 UTC m=+25.255810593 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.559790 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.576701 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.598130 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.609860 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.627586 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.642229 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:43Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.752143 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.752387 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:17:45.752348854 +0000 UTC m=+25.457281519 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.752798 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.752831 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.752940 4942 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.753011 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:45.75299498 +0000 UTC m=+25.457927655 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.753021 4942 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:17:43 crc kubenswrapper[4942]: E0218 19:17:43.753100 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:45.753091092 +0000 UTC m=+25.458023757 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:17:43 crc kubenswrapper[4942]: I0218 19:17:43.967634 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 06:12:22.137867181 +0000 UTC Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.035219 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.035291 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:44 crc kubenswrapper[4942]: E0218 19:17:44.035398 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:17:44 crc kubenswrapper[4942]: E0218 19:17:44.035462 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.238282 4942 generic.go:334] "Generic (PLEG): container finished" podID="1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d" containerID="3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3" exitCode=0 Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.238385 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" event={"ID":"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d","Type":"ContainerDied","Data":"3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3"} Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.244542 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerStarted","Data":"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94"} Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.244585 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerStarted","Data":"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c"} Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.244597 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerStarted","Data":"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7"} Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.244607 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerStarted","Data":"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7"} Feb 18 19:17:44 
crc kubenswrapper[4942]: I0218 19:17:44.244616 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerStarted","Data":"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6"} Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.284674 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.314851 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.342828 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.367838 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.382984 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.455078 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.477960 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.495702 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.512963 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.536048 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.549016 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.563586 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.579413 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:44Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:44 crc kubenswrapper[4942]: I0218 19:17:44.968748 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:25:30.469713948 +0000 UTC Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.035437 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.035605 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.214265 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-wxck8"] Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.214721 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-wxck8" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.216963 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.217224 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.217400 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.217572 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.231359 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.250335 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" 
event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerStarted","Data":"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9"} Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.252170 4942 generic.go:334] "Generic (PLEG): container finished" podID="1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d" containerID="83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80" exitCode=0 Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.252209 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" event={"ID":"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d","Type":"ContainerDied","Data":"83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80"} Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.259220 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.272718 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.287134 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.310971 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.330892 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.348825 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ad
b31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.363892 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.369133 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vscpp\" (UniqueName: \"kubernetes.io/projected/69ef2748-687e-4223-998e-7bd92ad8aaaf-kube-api-access-vscpp\") pod \"node-ca-wxck8\" (UID: \"69ef2748-687e-4223-998e-7bd92ad8aaaf\") " pod="openshift-image-registry/node-ca-wxck8" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.369165 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/69ef2748-687e-4223-998e-7bd92ad8aaaf-serviceca\") pod \"node-ca-wxck8\" (UID: \"69ef2748-687e-4223-998e-7bd92ad8aaaf\") " pod="openshift-image-registry/node-ca-wxck8" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.369204 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69ef2748-687e-4223-998e-7bd92ad8aaaf-host\") pod \"node-ca-wxck8\" (UID: \"69ef2748-687e-4223-998e-7bd92ad8aaaf\") " pod="openshift-image-registry/node-ca-wxck8" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.375257 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.389002 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.403272 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.420479 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.434904 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.450032 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.464594 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.470066 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vscpp\" (UniqueName: \"kubernetes.io/projected/69ef2748-687e-4223-998e-7bd92ad8aaaf-kube-api-access-vscpp\") pod \"node-ca-wxck8\" (UID: \"69ef2748-687e-4223-998e-7bd92ad8aaaf\") " pod="openshift-image-registry/node-ca-wxck8" Feb 18 19:17:45 
crc kubenswrapper[4942]: I0218 19:17:45.470108 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/69ef2748-687e-4223-998e-7bd92ad8aaaf-serviceca\") pod \"node-ca-wxck8\" (UID: \"69ef2748-687e-4223-998e-7bd92ad8aaaf\") " pod="openshift-image-registry/node-ca-wxck8" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.470129 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69ef2748-687e-4223-998e-7bd92ad8aaaf-host\") pod \"node-ca-wxck8\" (UID: \"69ef2748-687e-4223-998e-7bd92ad8aaaf\") " pod="openshift-image-registry/node-ca-wxck8" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.470217 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69ef2748-687e-4223-998e-7bd92ad8aaaf-host\") pod \"node-ca-wxck8\" (UID: \"69ef2748-687e-4223-998e-7bd92ad8aaaf\") " pod="openshift-image-registry/node-ca-wxck8" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.472269 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/69ef2748-687e-4223-998e-7bd92ad8aaaf-serviceca\") pod \"node-ca-wxck8\" (UID: \"69ef2748-687e-4223-998e-7bd92ad8aaaf\") " pod="openshift-image-registry/node-ca-wxck8" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.487504 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.497131 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vscpp\" (UniqueName: \"kubernetes.io/projected/69ef2748-687e-4223-998e-7bd92ad8aaaf-kube-api-access-vscpp\") pod \"node-ca-wxck8\" (UID: \"69ef2748-687e-4223-998e-7bd92ad8aaaf\") " pod="openshift-image-registry/node-ca-wxck8" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.505382 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.516743 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.534195 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 
19:17:45.539324 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-wxck8" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.558336 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.570878 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.571200 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.571244 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.571386 4942 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.571413 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.571426 4942 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.571477 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:49.571460042 +0000 UTC m=+29.276392707 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.571393 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.571500 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.571512 4942 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.571560 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:49.571541944 +0000 UTC m=+29.276474609 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.584196 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.596972 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.615207 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.628083 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.643277 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.669348 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.688438 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:45Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.774360 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.774549 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.774587 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.774633 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:17:49.774590549 +0000 UTC m=+29.479523214 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.774750 4942 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.774862 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:49.774840735 +0000 UTC m=+29.479773590 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.774777 4942 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.774964 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-18 19:17:49.774951388 +0000 UTC m=+29.479884263 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.851269 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.852469 4942 scope.go:117] "RemoveContainer" containerID="b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8" Feb 18 19:17:45 crc kubenswrapper[4942]: E0218 19:17:45.852636 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 18 19:17:45 crc kubenswrapper[4942]: I0218 19:17:45.970525 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 06:46:27.534913586 +0000 UTC Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.035569 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.035603 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:46 crc kubenswrapper[4942]: E0218 19:17:46.035840 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:17:46 crc kubenswrapper[4942]: E0218 19:17:46.035987 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.259644 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wxck8" event={"ID":"69ef2748-687e-4223-998e-7bd92ad8aaaf","Type":"ContainerStarted","Data":"2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727"} Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.259732 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wxck8" event={"ID":"69ef2748-687e-4223-998e-7bd92ad8aaaf","Type":"ContainerStarted","Data":"530f4ea3ed961092e14f800152879b7dd96034db958da0cc81eb74d156e31a47"} Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.264227 4942 generic.go:334] "Generic (PLEG): container finished" podID="1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d" containerID="b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da" exitCode=0 Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.264332 4942 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" event={"ID":"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d","Type":"ContainerDied","Data":"b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da"} Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.266837 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5"} Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.281496 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.305491 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.318456 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.342576 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 
19:17:46.360787 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.380562 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.393029 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.394480 4942 scope.go:117] "RemoveContainer" containerID="b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8" Feb 18 19:17:46 crc kubenswrapper[4942]: E0218 19:17:46.394749 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.398594 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.417181 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.445225 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.466451 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.487114 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.500396 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.520189 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.536179 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.557919 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.573832 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.593219 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.609468 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.623677 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.637632 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.654647 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.671796 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.690542 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.715029 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.732894 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.759999 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ad
b31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.781429 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.794633 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.805462 4942 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.807956 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.808018 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.808033 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.808231 4942 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.817636 4942 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.817986 4942 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.819295 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.819344 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:46 crc 
kubenswrapper[4942]: I0218 19:17:46.819360 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.819382 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.819398 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:46Z","lastTransitionTime":"2026-02-18T19:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:46 crc kubenswrapper[4942]: E0218 19:17:46.844996 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.853103 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.853188 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.853206 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.853232 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.853248 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:46Z","lastTransitionTime":"2026-02-18T19:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:46 crc kubenswrapper[4942]: E0218 19:17:46.871327 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.876787 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.876963 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.877297 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.877557 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.877684 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:46Z","lastTransitionTime":"2026-02-18T19:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:46 crc kubenswrapper[4942]: E0218 19:17:46.895498 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.900802 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.900998 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.901108 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.901261 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.901560 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:46Z","lastTransitionTime":"2026-02-18T19:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:46 crc kubenswrapper[4942]: E0218 19:17:46.924373 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.929579 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.929625 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.929638 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.929659 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.929672 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:46Z","lastTransitionTime":"2026-02-18T19:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:46 crc kubenswrapper[4942]: E0218 19:17:46.943707 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:46Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:46 crc kubenswrapper[4942]: E0218 19:17:46.943892 4942 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.945879 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.945914 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.945927 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.945945 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.945959 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:46Z","lastTransitionTime":"2026-02-18T19:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:46 crc kubenswrapper[4942]: I0218 19:17:46.971287 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 18:19:27.736138032 +0000 UTC Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.035004 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:47 crc kubenswrapper[4942]: E0218 19:17:47.035217 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.048312 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.048348 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.048361 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.048378 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.048389 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:47Z","lastTransitionTime":"2026-02-18T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.151068 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.151412 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.151500 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.151575 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.151637 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:47Z","lastTransitionTime":"2026-02-18T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.255743 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.255845 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.255870 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.255903 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.255925 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:47Z","lastTransitionTime":"2026-02-18T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.278049 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerStarted","Data":"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23"} Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.281860 4942 generic.go:334] "Generic (PLEG): container finished" podID="1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d" containerID="86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703" exitCode=0 Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.282505 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" event={"ID":"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d","Type":"ContainerDied","Data":"86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703"} Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.309544 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.335987 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.355988 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.358879 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.358913 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.358928 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.358955 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.358971 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:47Z","lastTransitionTime":"2026-02-18T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.373614 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.380633 4942 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.387661 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.399566 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.414096 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.428941 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.447613 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.463353 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.463393 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.463405 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.463362 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.463427 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.463601 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:47Z","lastTransitionTime":"2026-02-18T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.478017 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.490787 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.504214 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.519358 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:47Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.566221 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.566261 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.566271 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.566289 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.566299 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:47Z","lastTransitionTime":"2026-02-18T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.671386 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.671436 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.671450 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.671471 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.671484 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:47Z","lastTransitionTime":"2026-02-18T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.775402 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.775456 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.775470 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.775491 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.775508 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:47Z","lastTransitionTime":"2026-02-18T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.879499 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.879543 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.879559 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.879580 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.879592 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:47Z","lastTransitionTime":"2026-02-18T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.971837 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 07:36:35.878794846 +0000 UTC Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.982953 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.982994 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.983007 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.983070 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:47 crc kubenswrapper[4942]: I0218 19:17:47.983087 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:47Z","lastTransitionTime":"2026-02-18T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.034869 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.034902 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:48 crc kubenswrapper[4942]: E0218 19:17:48.035648 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:17:48 crc kubenswrapper[4942]: E0218 19:17:48.035904 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.086565 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.086610 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.086629 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.086654 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.086670 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:48Z","lastTransitionTime":"2026-02-18T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.193992 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.194392 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.194571 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.194689 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.194850 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:48Z","lastTransitionTime":"2026-02-18T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.290556 4942 generic.go:334] "Generic (PLEG): container finished" podID="1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d" containerID="522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168" exitCode=0 Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.290622 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" event={"ID":"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d","Type":"ContainerDied","Data":"522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168"} Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.296753 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.297054 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.297205 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.297365 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.297508 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:48Z","lastTransitionTime":"2026-02-18T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.315302 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.335036 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.359579 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.378537 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.401138 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:48 crc 
kubenswrapper[4942]: I0218 19:17:48.401240 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.401272 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.401308 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.401332 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:48Z","lastTransitionTime":"2026-02-18T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.407351 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.428718 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.450462 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.473097 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.489112 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.504285 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.504336 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.504356 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:48 crc 
kubenswrapper[4942]: I0218 19:17:48.504381 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.504398 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:48Z","lastTransitionTime":"2026-02-18T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.507271 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.522594 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.534148 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.549160 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.567430 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.608011 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.608064 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.608085 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.608105 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.608115 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:48Z","lastTransitionTime":"2026-02-18T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.710721 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.710779 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.710791 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.710808 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.710821 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:48Z","lastTransitionTime":"2026-02-18T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.814009 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.814068 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.814080 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.814102 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.814116 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:48Z","lastTransitionTime":"2026-02-18T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.917017 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.917079 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.917092 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.917113 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.917124 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:48Z","lastTransitionTime":"2026-02-18T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:48 crc kubenswrapper[4942]: I0218 19:17:48.972564 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 06:21:24.585700294 +0000 UTC Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.020707 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.020747 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.020789 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.020813 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.020830 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:49Z","lastTransitionTime":"2026-02-18T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.035720 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.035931 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.124121 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.124213 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.124244 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.124263 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.124276 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:49Z","lastTransitionTime":"2026-02-18T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.227475 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.227867 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.228346 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.228830 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.229001 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:49Z","lastTransitionTime":"2026-02-18T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.300331 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" event={"ID":"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d","Type":"ContainerStarted","Data":"d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97"} Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.309122 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerStarted","Data":"34ae88814307bf6ee0867a2fd00ea4020fd0b74379801aad00948088bac875bf"} Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.309850 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.310094 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.323288 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.331182 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.331262 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.331287 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.331318 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.331337 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:49Z","lastTransitionTime":"2026-02-18T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.346243 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c07167
38e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.360623 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.369349 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.383844 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.408716 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.433984 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.434020 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.434030 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.434049 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.434063 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:49Z","lastTransitionTime":"2026-02-18T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.444969 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.463957 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.481125 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.500878 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.514239 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.528417 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.536892 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:49 crc 
kubenswrapper[4942]: I0218 19:17:49.536968 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.536983 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.537003 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.537017 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:49Z","lastTransitionTime":"2026-02-18T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.542670 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ad
b31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.560486 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.572599 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.586391 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.596512 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.612123 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.617455 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.617524 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.617740 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.617806 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.617827 4942 projected.go:194] 
Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.617918 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:57.617886231 +0000 UTC m=+37.322818936 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.618400 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.618423 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.618432 4942 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.618464 
4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:57.618455325 +0000 UTC m=+37.323387990 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.627000 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z 
is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.639368 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.639427 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.639443 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.639471 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.639493 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:49Z","lastTransitionTime":"2026-02-18T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.644450 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5
084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.659575 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.673380 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.688866 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.702902 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.715958 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.733079 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.742393 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.742436 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.742447 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.742466 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.742478 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:49Z","lastTransitionTime":"2026-02-18T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.754145 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.772548 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.791892 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ae88814307bf6ee0867a2fd00ea4020fd0b74379801aad00948088bac875bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:49Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.820070 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.820347 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.820394 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.820610 4942 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.820801 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:57.820665199 +0000 UTC m=+37.525597864 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.821154 4942 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.821417 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:57.821241363 +0000 UTC m=+37.526174048 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:17:49 crc kubenswrapper[4942]: E0218 19:17:49.821735 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:17:57.821715165 +0000 UTC m=+37.526647830 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.845853 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.845895 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.845907 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.845923 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.845933 4942 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:49Z","lastTransitionTime":"2026-02-18T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.949112 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.949698 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.949855 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.950010 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.950125 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:49Z","lastTransitionTime":"2026-02-18T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:17:49 crc kubenswrapper[4942]: I0218 19:17:49.973478 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 03:06:04.5697286 +0000 UTC
Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.005319 4942 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.035058 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.035071 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:17:50 crc kubenswrapper[4942]: E0218 19:17:50.035594 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:17:50 crc kubenswrapper[4942]: E0218 19:17:50.035902 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.052804 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.052880 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.052901 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.052926 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.052942 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:50Z","lastTransitionTime":"2026-02-18T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.155933 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.156372 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.156499 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.156623 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.156687 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:50Z","lastTransitionTime":"2026-02-18T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.259544 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.259603 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.259615 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.259663 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.259680 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:50Z","lastTransitionTime":"2026-02-18T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.312666 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.348894 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.361849 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.361922 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.361941 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.361969 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.361987 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:50Z","lastTransitionTime":"2026-02-18T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.364167 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.378154 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.397915 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.413894 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.434848 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.465541 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:50 crc 
kubenswrapper[4942]: I0218 19:17:50.465599 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.465618 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.465644 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.465666 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:50Z","lastTransitionTime":"2026-02-18T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.466560 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ae88814307bf6ee0867a2fd00ea4020fd0b74379801aad00948088bac875bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.489275 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.508478 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.524516 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.552573 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.568442 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.568474 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.568508 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.568528 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.568540 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:50Z","lastTransitionTime":"2026-02-18T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.591100 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.627874 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.640203 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.656294 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:50Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.671118 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.671197 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.671210 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.671232 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.671245 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:50Z","lastTransitionTime":"2026-02-18T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.774924 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.775035 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.775063 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.775100 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.775143 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:50Z","lastTransitionTime":"2026-02-18T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.878967 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.879043 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.879066 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.879090 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.879104 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:50Z","lastTransitionTime":"2026-02-18T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.974152 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 13:58:51.278953405 +0000 UTC Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.982168 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.982213 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.982223 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.982242 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:50 crc kubenswrapper[4942]: I0218 19:17:50.982256 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:50Z","lastTransitionTime":"2026-02-18T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.035039 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:51 crc kubenswrapper[4942]: E0218 19:17:51.035227 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.056149 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.084614 4942 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.085743 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.085799 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.085809 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.085830 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.085839 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:51Z","lastTransitionTime":"2026-02-18T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.105257 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.116522 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.141157 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.166298 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.181561 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.189058 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.189112 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.189123 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.189147 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.189160 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:51Z","lastTransitionTime":"2026-02-18T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.197310 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.212569 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.228385 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.259002 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ae88814307bf6ee0867a2fd00ea4020fd0b74379801aad00948088bac875bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.273009 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.285077 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.292326 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.292373 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.292384 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.292407 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.292419 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:51Z","lastTransitionTime":"2026-02-18T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.312947 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.395495 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.395543 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.395554 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.395572 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.395584 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:51Z","lastTransitionTime":"2026-02-18T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.498957 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.499014 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.499025 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.499048 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.499065 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:51Z","lastTransitionTime":"2026-02-18T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.601678 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.602056 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.602065 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.602085 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.602096 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:51Z","lastTransitionTime":"2026-02-18T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.705198 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.705241 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.705251 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.705273 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.705284 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:51Z","lastTransitionTime":"2026-02-18T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.808444 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.808505 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.808517 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.808540 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.808554 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:51Z","lastTransitionTime":"2026-02-18T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.911655 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.911700 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.911713 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.911743 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.911788 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:51Z","lastTransitionTime":"2026-02-18T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:51 crc kubenswrapper[4942]: I0218 19:17:51.975183 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 22:15:30.302671854 +0000 UTC Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.015349 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.015434 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.015457 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.015488 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.015510 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:52Z","lastTransitionTime":"2026-02-18T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.035812 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.035812 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:52 crc kubenswrapper[4942]: E0218 19:17:52.036053 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:17:52 crc kubenswrapper[4942]: E0218 19:17:52.036179 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.119108 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.119173 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.119189 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.119210 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.119226 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:52Z","lastTransitionTime":"2026-02-18T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.222304 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.222360 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.222374 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.222395 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.222424 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:52Z","lastTransitionTime":"2026-02-18T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.321818 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/0.log" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.324808 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.324869 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.324885 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.324908 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.324924 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:52Z","lastTransitionTime":"2026-02-18T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.332131 4942 generic.go:334] "Generic (PLEG): container finished" podID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerID="34ae88814307bf6ee0867a2fd00ea4020fd0b74379801aad00948088bac875bf" exitCode=1 Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.332205 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"34ae88814307bf6ee0867a2fd00ea4020fd0b74379801aad00948088bac875bf"} Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.333121 4942 scope.go:117] "RemoveContainer" containerID="34ae88814307bf6ee0867a2fd00ea4020fd0b74379801aad00948088bac875bf" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.352589 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.365682 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.383025 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.397149 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.414793 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.428947 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:52 crc 
kubenswrapper[4942]: I0218 19:17:52.429016 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.429034 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.429060 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.429079 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:52Z","lastTransitionTime":"2026-02-18T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.441002 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ae88814307bf6ee0867a2fd00ea4020fd0b74379801aad00948088bac875bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ae88814307bf6ee0867a2fd00ea4020fd0b74379801aad00948088bac875bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19
:17:51Z\\\",\\\"message\\\":\\\"roup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0218 19:17:51.651321 6254 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:17:51.651348 6254 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:17:51.651395 6254 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}\\\\nF0218 19:17:51.651417 6254 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to 
star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543ed
c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.461378 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.476451 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.498091 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.520909 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.532189 4942 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.532218 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.532227 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.532244 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.532257 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:52Z","lastTransitionTime":"2026-02-18T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.536715 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5
084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.548221 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.558688 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.574750 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:52Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.635665 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.635710 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.635724 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.635744 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.635755 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:52Z","lastTransitionTime":"2026-02-18T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.738640 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.738691 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.738701 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.738724 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.738739 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:52Z","lastTransitionTime":"2026-02-18T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.841856 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.842302 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.842383 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.842467 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.842562 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:52Z","lastTransitionTime":"2026-02-18T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.877689 4942 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.945702 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.945756 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.945789 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.945813 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.945832 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:52Z","lastTransitionTime":"2026-02-18T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:52 crc kubenswrapper[4942]: I0218 19:17:52.975379 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 01:45:22.663782011 +0000 UTC Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.035950 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:53 crc kubenswrapper[4942]: E0218 19:17:53.036140 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.049203 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.049252 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.049265 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.049289 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.049304 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:53Z","lastTransitionTime":"2026-02-18T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.152169 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.152243 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.152266 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.152297 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.152319 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:53Z","lastTransitionTime":"2026-02-18T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.254949 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.254998 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.255016 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.255043 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.255054 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:53Z","lastTransitionTime":"2026-02-18T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.336608 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/0.log" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.339024 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerStarted","Data":"093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717"} Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.339564 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.354815 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.357641 4942 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.357678 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.357690 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.357711 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.357726 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:53Z","lastTransitionTime":"2026-02-18T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.370954 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.386525 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.397122 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.409020 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.424548 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.448213 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.460381 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:53 crc 
kubenswrapper[4942]: I0218 19:17:53.460420 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.460432 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.460448 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.460458 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:53Z","lastTransitionTime":"2026-02-18T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.478173 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ae88814307bf6ee0867a2fd00ea4020fd0b74379801aad00948088bac875bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:17:51Z\\\",\\\"message\\\":\\\"roup\\\\\\\"}}}, built lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0218 19:17:51.651321 6254 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:17:51.651348 6254 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:17:51.651395 6254 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}\\\\nF0218 19:17:51.651417 6254 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to 
star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\
\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.494310 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.507681 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.522369 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.537731 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ad
b31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.552549 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.563555 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.563598 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.563609 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:53 crc 
kubenswrapper[4942]: I0218 19:17:53.563630 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.563642 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:53Z","lastTransitionTime":"2026-02-18T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.565172 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:53Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.666196 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.666579 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.666680 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.666797 4942 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.666897 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:53Z","lastTransitionTime":"2026-02-18T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.769821 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.769868 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.769880 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.769899 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.769913 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:53Z","lastTransitionTime":"2026-02-18T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.872617 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.872897 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.872959 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.873021 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.873085 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:53Z","lastTransitionTime":"2026-02-18T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.975949 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 07:25:47.144854735 +0000 UTC Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.977661 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.977719 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.977738 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.977796 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:53 crc kubenswrapper[4942]: I0218 19:17:53.977816 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:53Z","lastTransitionTime":"2026-02-18T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.035695 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:54 crc kubenswrapper[4942]: E0218 19:17:54.035877 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.036078 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:54 crc kubenswrapper[4942]: E0218 19:17:54.036317 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.081934 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.082039 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.082057 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.082078 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.082092 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:54Z","lastTransitionTime":"2026-02-18T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.186129 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.186173 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.186190 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.186210 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.186223 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:54Z","lastTransitionTime":"2026-02-18T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.289557 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.289608 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.289619 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.289639 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.289655 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:54Z","lastTransitionTime":"2026-02-18T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.345726 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/1.log" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.347044 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/0.log" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.352019 4942 generic.go:334] "Generic (PLEG): container finished" podID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerID="093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717" exitCode=1 Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.352104 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717"} Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.352375 4942 scope.go:117] "RemoveContainer" containerID="34ae88814307bf6ee0867a2fd00ea4020fd0b74379801aad00948088bac875bf" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.353243 4942 scope.go:117] "RemoveContainer" containerID="093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717" Feb 18 19:17:54 crc kubenswrapper[4942]: E0218 19:17:54.353667 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.372231 4942 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.392607 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.392720 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.392733 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.392755 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.392795 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:54Z","lastTransitionTime":"2026-02-18T19:17:54Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.396728 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.413496 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.427722 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.445679 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.461467 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.496538 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ae88814307bf6ee0867a2fd00ea4020fd0b74379801aad00948088bac875bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:17:51Z\\\",\\\"message\\\":\\\"roup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0218 19:17:51.651321 6254 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:17:51.651348 6254 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:17:51.651395 6254 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}\\\\nF0218 19:17:51.651417 6254 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:17:53Z\\\",\\\"message\\\":\\\"4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.331601 6390 
factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 19:17:53.331580 6390 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.333326 6390 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 19:17:53.333620 6390 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 19:17:53.333683 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:17:53.333723 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 19:17:53.333863 6390 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee
15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.496900 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.496971 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.496992 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.497023 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.497043 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:54Z","lastTransitionTime":"2026-02-18T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.520902 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.542065 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.565905 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.579740 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.599519 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.599659 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.600054 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.600066 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.600084 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:54 crc kubenswrapper[4942]: 
I0218 19:17:54.600098 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:54Z","lastTransitionTime":"2026-02-18T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.616646 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.628078 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.704531 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.704836 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.704930 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.705007 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.705067 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:54Z","lastTransitionTime":"2026-02-18T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.715140 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z"] Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.716244 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.719882 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.721500 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.739263 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.756095 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.774199 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.777522 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8f8b40cd-7bbd-4189-a8c0-f4131e8b9add-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xk99z\" (UID: \"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.777625 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8f8b40cd-7bbd-4189-a8c0-f4131e8b9add-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xk99z\" (UID: \"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 
19:17:54.777685 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8f8b40cd-7bbd-4189-a8c0-f4131e8b9add-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xk99z\" (UID: \"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.777711 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zxvs\" (UniqueName: \"kubernetes.io/projected/8f8b40cd-7bbd-4189-a8c0-f4131e8b9add-kube-api-access-2zxvs\") pod \"ovnkube-control-plane-749d76644c-xk99z\" (UID: \"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.789911 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.807486 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.807528 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.807537 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:54 crc 
kubenswrapper[4942]: I0218 19:17:54.807556 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.807567 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:54Z","lastTransitionTime":"2026-02-18T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.813498 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.836065 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ae88814307bf6ee0867a2fd00ea4020fd0b74379801aad00948088bac875bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:17:51Z\\\",\\\"message\\\":\\\"roup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0218 19:17:51.651321 6254 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:17:51.651348 6254 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 19:17:51.651395 6254 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}\\\\nF0218 19:17:51.651417 6254 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:17:53Z\\\",\\\"message\\\":\\\"4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.331601 6390 
factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 19:17:53.331580 6390 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.333326 6390 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 19:17:53.333620 6390 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 19:17:53.333683 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:17:53.333723 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 19:17:53.333863 6390 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee
15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.851738 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.869286 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.878799 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8f8b40cd-7bbd-4189-a8c0-f4131e8b9add-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xk99z\" (UID: \"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.878845 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zxvs\" (UniqueName: \"kubernetes.io/projected/8f8b40cd-7bbd-4189-a8c0-f4131e8b9add-kube-api-access-2zxvs\") pod \"ovnkube-control-plane-749d76644c-xk99z\" (UID: \"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.878867 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8f8b40cd-7bbd-4189-a8c0-f4131e8b9add-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xk99z\" (UID: \"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.878918 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8f8b40cd-7bbd-4189-a8c0-f4131e8b9add-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xk99z\" (UID: \"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.879577 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8f8b40cd-7bbd-4189-a8c0-f4131e8b9add-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xk99z\" (UID: \"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.879693 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8f8b40cd-7bbd-4189-a8c0-f4131e8b9add-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xk99z\" (UID: \"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.887332 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.889463 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8f8b40cd-7bbd-4189-a8c0-f4131e8b9add-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xk99z\" (UID: 
\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.900024 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.901262 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zxvs\" (UniqueName: \"kubernetes.io/projected/8f8b40cd-7bbd-4189-a8c0-f4131e8b9add-kube-api-access-2zxvs\") pod \"ovnkube-control-plane-749d76644c-xk99z\" (UID: \"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.909543 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.909575 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.909586 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.909624 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.909638 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:54Z","lastTransitionTime":"2026-02-18T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.920092 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5
084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.936176 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.953179 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.976412 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 14:01:52.301371042 +0000 UTC Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.976710 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"stat
e\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:54 crc kubenswrapper[4942]: I0218 19:17:54.997205 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:54Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.012910 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.013108 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:55 crc 
kubenswrapper[4942]: I0218 19:17:55.013193 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.013331 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.013415 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:55Z","lastTransitionTime":"2026-02-18T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.031259 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.035649 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:55 crc kubenswrapper[4942]: E0218 19:17:55.035845 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:17:55 crc kubenswrapper[4942]: W0218 19:17:55.051082 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f8b40cd_7bbd_4189_a8c0_f4131e8b9add.slice/crio-07903325e930473a2cb750b610573cffd86b8c58f8b6f3b67e6a0cf63c6bfca4 WatchSource:0}: Error finding container 07903325e930473a2cb750b610573cffd86b8c58f8b6f3b67e6a0cf63c6bfca4: Status 404 returned error can't find the container with id 07903325e930473a2cb750b610573cffd86b8c58f8b6f3b67e6a0cf63c6bfca4 Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.118282 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.118338 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.118404 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.118426 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.118450 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:55Z","lastTransitionTime":"2026-02-18T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.221386 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.221494 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.221521 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.221551 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.221570 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:55Z","lastTransitionTime":"2026-02-18T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.326093 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.326837 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.326992 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.327139 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.327298 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:55Z","lastTransitionTime":"2026-02-18T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.366518 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/1.log" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.372206 4942 scope.go:117] "RemoveContainer" containerID="093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717" Feb 18 19:17:55 crc kubenswrapper[4942]: E0218 19:17:55.372517 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.378801 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" event={"ID":"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add","Type":"ContainerStarted","Data":"b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6"} Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.378863 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" event={"ID":"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add","Type":"ContainerStarted","Data":"07903325e930473a2cb750b610573cffd86b8c58f8b6f3b67e6a0cf63c6bfca4"} Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.394446 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.424321 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.430321 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.430384 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.430397 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.430418 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.430673 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:55Z","lastTransitionTime":"2026-02-18T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.438225 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.458105 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.476203 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.494154 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.510060 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.523256 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.533657 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.533721 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.533738 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.533782 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.533813 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:55Z","lastTransitionTime":"2026-02-18T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.543265 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.568213 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:17:53Z\\\",\\\"message\\\":\\\"4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.331601 6390 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 
19:17:53.331580 6390 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.333326 6390 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 19:17:53.333620 6390 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 19:17:53.333683 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:17:53.333723 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 19:17:53.333863 6390 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.588941 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.607826 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.620730 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.632986 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.637151 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.637221 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.637241 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:55 crc 
kubenswrapper[4942]: I0218 19:17:55.637270 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.637335 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:55Z","lastTransitionTime":"2026-02-18T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.648312 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:55Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.739832 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.740115 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.740193 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.740277 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.740388 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:55Z","lastTransitionTime":"2026-02-18T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.843560 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.843614 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.843625 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.843648 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.843660 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:55Z","lastTransitionTime":"2026-02-18T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.946349 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.946400 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.946414 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.946437 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.946449 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:55Z","lastTransitionTime":"2026-02-18T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:55 crc kubenswrapper[4942]: I0218 19:17:55.977722 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 16:15:03.423602074 +0000 UTC Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.035803 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.035832 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:56 crc kubenswrapper[4942]: E0218 19:17:56.035935 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:17:56 crc kubenswrapper[4942]: E0218 19:17:56.036051 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.049244 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.049286 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.049296 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.049315 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.049329 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:56Z","lastTransitionTime":"2026-02-18T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.152947 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.152998 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.153008 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.153026 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.153035 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:56Z","lastTransitionTime":"2026-02-18T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.214456 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-qwg6q"] Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.214981 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:17:56 crc kubenswrapper[4942]: E0218 19:17:56.215046 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.233669 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fde
e88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resource
s\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.251912 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.256458 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.256541 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.256561 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.256615 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.256636 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:56Z","lastTransitionTime":"2026-02-18T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.270085 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.283665 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc 
kubenswrapper[4942]: I0218 19:17:56.294370 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kmmq\" (UniqueName: \"kubernetes.io/projected/ac5b5f40-34db-4aeb-abb4-57204673bd53-kube-api-access-7kmmq\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.294527 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.304117 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.321236 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.339598 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.355724 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.359847 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.359888 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.359899 4942 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.359921 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.359933 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:56Z","lastTransitionTime":"2026-02-18T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.373993 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a4
6e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.384259 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" event={"ID":"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add","Type":"ContainerStarted","Data":"3573f095c220e3b1994394b83fdf24c7d1a721ccee2755042f520467f21ae1fa"} Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.393502 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318
f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secre
ts/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.395116 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kmmq\" (UniqueName: \"kubernetes.io/projected/ac5b5f40-34db-4aeb-abb4-57204673bd53-kube-api-access-7kmmq\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.395272 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:17:56 crc kubenswrapper[4942]: E0218 19:17:56.395531 4942 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:17:56 crc kubenswrapper[4942]: E0218 19:17:56.395646 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs 
podName:ac5b5f40-34db-4aeb-abb4-57204673bd53 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:56.895617768 +0000 UTC m=+36.600550433 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs") pod "network-metrics-daemon-qwg6q" (UID: "ac5b5f40-34db-4aeb-abb4-57204673bd53") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.419122 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:17:53Z\\\",\\\"message\\\":\\\"4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.331601 6390 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 
19:17:53.331580 6390 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.333326 6390 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 19:17:53.333620 6390 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 19:17:53.333683 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:17:53.333723 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 19:17:53.333863 6390 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.445561 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kmmq\" (UniqueName: \"kubernetes.io/projected/ac5b5f40-34db-4aeb-abb4-57204673bd53-kube-api-access-7kmmq\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.459924 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.474973 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.475032 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.475051 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.475083 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.475098 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:56Z","lastTransitionTime":"2026-02-18T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.488425 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.503476 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.516922 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.527377 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.543071 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.557645 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.572684 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.578118 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.578145 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.578154 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.578169 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.578179 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:56Z","lastTransitionTime":"2026-02-18T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.591632 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.605434 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a72
1ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.619818 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc 
kubenswrapper[4942]: I0218 19:17:56.635671 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.648445 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.661305 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.677920 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.680637 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.680684 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.680700 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.680720 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.680731 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:56Z","lastTransitionTime":"2026-02-18T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.698966 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.715223 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.729201 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.740662 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.755232 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.777016 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:17:53Z\\\",\\\"message\\\":\\\"4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.331601 6390 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 
19:17:53.331580 6390 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.333326 6390 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 19:17:53.333620 6390 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 19:17:53.333683 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:17:53.333723 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 19:17:53.333863 6390 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.783907 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.783954 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.783967 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.783990 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.784005 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:56Z","lastTransitionTime":"2026-02-18T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.886592 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.886653 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.886674 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.886700 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.886721 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:56Z","lastTransitionTime":"2026-02-18T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.899742 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:17:56 crc kubenswrapper[4942]: E0218 19:17:56.899945 4942 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:17:56 crc kubenswrapper[4942]: E0218 19:17:56.900015 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs podName:ac5b5f40-34db-4aeb-abb4-57204673bd53 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:57.899996405 +0000 UTC m=+37.604929070 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs") pod "network-metrics-daemon-qwg6q" (UID: "ac5b5f40-34db-4aeb-abb4-57204673bd53") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.978201 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 17:56:32.730918974 +0000 UTC Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.982122 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.982167 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.982177 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.982195 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:56 crc kubenswrapper[4942]: I0218 19:17:56.982208 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:56Z","lastTransitionTime":"2026-02-18T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:56.999910 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:56Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.004872 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.004961 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.004980 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.005010 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.005029 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:57Z","lastTransitionTime":"2026-02-18T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.027407 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.032572 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.032615 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.032628 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.032647 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.032663 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:57Z","lastTransitionTime":"2026-02-18T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.035170 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.035329 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.055581 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.060719 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.060769 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.060783 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.060800 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.060812 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:57Z","lastTransitionTime":"2026-02-18T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.079621 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.084324 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.084554 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.084710 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.084866 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.084973 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:57Z","lastTransitionTime":"2026-02-18T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.100434 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:57Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.101016 4942 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.103370 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.103449 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.103466 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.103486 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.103497 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:57Z","lastTransitionTime":"2026-02-18T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.206635 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.206715 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.206737 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.206794 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.206815 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:57Z","lastTransitionTime":"2026-02-18T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.311929 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.311987 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.312004 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.312034 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.312052 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:57Z","lastTransitionTime":"2026-02-18T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.414860 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.414904 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.414916 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.414937 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.414951 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:57Z","lastTransitionTime":"2026-02-18T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.517880 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.517934 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.517953 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.517976 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.517991 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:57Z","lastTransitionTime":"2026-02-18T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.621812 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.621897 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.621917 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.621943 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.621962 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:57Z","lastTransitionTime":"2026-02-18T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.708250 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.708308 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.708553 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.708559 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.708635 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.708657 4942 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 
19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.708580 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.708747 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:13.70871734 +0000 UTC m=+53.413650045 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.708783 4942 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.708882 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:13.708865873 +0000 UTC m=+53.413798578 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.725067 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.725112 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.725144 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.725161 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.725170 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:57Z","lastTransitionTime":"2026-02-18T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.827576 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.827631 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.827647 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.827672 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.827690 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:57Z","lastTransitionTime":"2026-02-18T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.910071 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.910170 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.910274 4942 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.910270 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:18:13.910232867 +0000 UTC m=+53.615165572 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.910377 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs podName:ac5b5f40-34db-4aeb-abb4-57204673bd53 nodeName:}" failed. No retries permitted until 2026-02-18 19:17:59.91035785 +0000 UTC m=+39.615290555 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs") pod "network-metrics-daemon-qwg6q" (UID: "ac5b5f40-34db-4aeb-abb4-57204673bd53") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.910417 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.910476 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.910566 4942 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.910630 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:13.910613546 +0000 UTC m=+53.615546231 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.910650 4942 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:17:57 crc kubenswrapper[4942]: E0218 19:17:57.910837 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:13.910803581 +0000 UTC m=+53.615736406 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.931439 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.931508 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.931531 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.931561 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.931584 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:57Z","lastTransitionTime":"2026-02-18T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:57 crc kubenswrapper[4942]: I0218 19:17:57.979218 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 03:47:59.850819303 +0000 UTC Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.034312 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.034365 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.034381 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.034401 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.034415 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:58Z","lastTransitionTime":"2026-02-18T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.035122 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.035227 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.035124 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:17:58 crc kubenswrapper[4942]: E0218 19:17:58.035490 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:17:58 crc kubenswrapper[4942]: E0218 19:17:58.035246 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:17:58 crc kubenswrapper[4942]: E0218 19:17:58.035564 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.136508 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.136651 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.136675 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.136699 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.136719 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:58Z","lastTransitionTime":"2026-02-18T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.239614 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.239655 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.239667 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.239686 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.239699 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:58Z","lastTransitionTime":"2026-02-18T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.342786 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.342837 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.342849 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.342873 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.342887 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:58Z","lastTransitionTime":"2026-02-18T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.446251 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.446308 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.446326 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.446349 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.446364 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:58Z","lastTransitionTime":"2026-02-18T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.549599 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.549664 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.549677 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.549703 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.549721 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:58Z","lastTransitionTime":"2026-02-18T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.653053 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.653132 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.653152 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.653187 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.653210 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:58Z","lastTransitionTime":"2026-02-18T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.756298 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.756392 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.756420 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.756456 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.756484 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:58Z","lastTransitionTime":"2026-02-18T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.860359 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.860428 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.860446 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.860474 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.860494 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:58Z","lastTransitionTime":"2026-02-18T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.963938 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.964019 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.964034 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.964053 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.964067 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:58Z","lastTransitionTime":"2026-02-18T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:58 crc kubenswrapper[4942]: I0218 19:17:58.979395 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:34:28.370997943 +0000 UTC Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.035641 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:17:59 crc kubenswrapper[4942]: E0218 19:17:59.036479 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.036965 4942 scope.go:117] "RemoveContainer" containerID="b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.070231 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.070300 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.070314 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.070336 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.070356 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:59Z","lastTransitionTime":"2026-02-18T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.173905 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.174008 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.174025 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.174083 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.174099 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:59Z","lastTransitionTime":"2026-02-18T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.277235 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.277647 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.277669 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.277699 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.277719 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:59Z","lastTransitionTime":"2026-02-18T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.380990 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.381057 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.381078 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.381106 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.381126 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:59Z","lastTransitionTime":"2026-02-18T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.404039 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.408021 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3"} Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.408826 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.432802 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.451599 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.470164 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.483810 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.483875 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.483895 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.483923 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.483943 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:59Z","lastTransitionTime":"2026-02-18T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.486591 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5
084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.503590 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.518048 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a72
1ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.531726 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc 
kubenswrapper[4942]: I0218 19:17:59.545911 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.556973 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.571958 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.587281 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.587616 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.587703 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.587813 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.587979 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:59Z","lastTransitionTime":"2026-02-18T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.589504 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.606481 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.621429 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.637583 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.654329 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.681110 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:17:53Z\\\",\\\"message\\\":\\\"4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.331601 6390 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 
19:17:53.331580 6390 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.333326 6390 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 19:17:53.333620 6390 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 19:17:53.333683 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:17:53.333723 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 19:17:53.333863 6390 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:17:59Z is after 2025-08-24T17:21:41Z" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.690693 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.690751 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.690795 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.690822 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.690840 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:59Z","lastTransitionTime":"2026-02-18T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.793578 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.793615 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.793626 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.793648 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.793661 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:59Z","lastTransitionTime":"2026-02-18T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.896609 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.896654 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.896669 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.896688 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.896700 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:59Z","lastTransitionTime":"2026-02-18T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.946136 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:17:59 crc kubenswrapper[4942]: E0218 19:17:59.946312 4942 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:17:59 crc kubenswrapper[4942]: E0218 19:17:59.946381 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs podName:ac5b5f40-34db-4aeb-abb4-57204673bd53 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:03.946364725 +0000 UTC m=+43.651297390 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs") pod "network-metrics-daemon-qwg6q" (UID: "ac5b5f40-34db-4aeb-abb4-57204673bd53") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.979917 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 17:27:27.07841183 +0000 UTC Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.999338 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.999437 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.999447 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.999469 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:17:59 crc kubenswrapper[4942]: I0218 19:17:59.999478 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:17:59Z","lastTransitionTime":"2026-02-18T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.035687 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.035826 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:00 crc kubenswrapper[4942]: E0218 19:18:00.035898 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.035844 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:00 crc kubenswrapper[4942]: E0218 19:18:00.036010 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:00 crc kubenswrapper[4942]: E0218 19:18:00.036120 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.102146 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.102200 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.102217 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.102239 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.102254 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:00Z","lastTransitionTime":"2026-02-18T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.205471 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.205535 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.205552 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.205577 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.205807 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:00Z","lastTransitionTime":"2026-02-18T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.310445 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.310497 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.310507 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.310530 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.310542 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:00Z","lastTransitionTime":"2026-02-18T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.412742 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.412807 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.412819 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.412836 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.412847 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:00Z","lastTransitionTime":"2026-02-18T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.516515 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.516563 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.516571 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.516589 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.516600 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:00Z","lastTransitionTime":"2026-02-18T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.620119 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.620195 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.620209 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.620232 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.620270 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:00Z","lastTransitionTime":"2026-02-18T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.723334 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.723457 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.723492 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.723523 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.723545 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:00Z","lastTransitionTime":"2026-02-18T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.827203 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.827281 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.827293 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.827323 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.827337 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:00Z","lastTransitionTime":"2026-02-18T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.930419 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.930495 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.930509 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.930530 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.930541 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:00Z","lastTransitionTime":"2026-02-18T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:00 crc kubenswrapper[4942]: I0218 19:18:00.980410 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 07:05:11.098544652 +0000 UTC Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.034993 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.035168 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.035280 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.035307 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.035339 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.035361 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:01Z","lastTransitionTime":"2026-02-18T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:01 crc kubenswrapper[4942]: E0218 19:18:01.035370 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.059849 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.080701 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.099826 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.123271 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.139043 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.139106 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.139118 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.139143 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.139129 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"
name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.139156 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:01Z","lastTransitionTime":"2026-02-18T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.155733 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metri
cs-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a721ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.172186 4942 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc 
kubenswrapper[4942]: I0218 19:18:01.190077 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.205634 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.229878 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.242081 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.242161 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.242180 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.242209 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.242229 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:01Z","lastTransitionTime":"2026-02-18T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.248679 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\
\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.275022 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:17:53Z\\\",\\\"message\\\":\\\"4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.331601 6390 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 
19:17:53.331580 6390 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.333326 6390 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 19:17:53.333620 6390 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 19:17:53.333683 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:17:53.333723 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 19:17:53.333863 6390 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.294086 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.308992 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.322789 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.336910 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:01Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.345024 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.345107 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.345141 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:01 crc 
kubenswrapper[4942]: I0218 19:18:01.345158 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.345170 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:01Z","lastTransitionTime":"2026-02-18T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.448209 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.448281 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.448297 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.448325 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.448343 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:01Z","lastTransitionTime":"2026-02-18T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.552057 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.552118 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.552135 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.552163 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.552182 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:01Z","lastTransitionTime":"2026-02-18T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.656560 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.656608 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.656619 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.656638 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.656650 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:01Z","lastTransitionTime":"2026-02-18T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.762636 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.762856 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.762888 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.762928 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.762965 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:01Z","lastTransitionTime":"2026-02-18T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.868362 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.868817 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.868941 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.869088 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.869200 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:01Z","lastTransitionTime":"2026-02-18T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.971693 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.971748 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.971798 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.971827 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.971845 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:01Z","lastTransitionTime":"2026-02-18T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:01 crc kubenswrapper[4942]: I0218 19:18:01.981187 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 00:17:37.004494459 +0000 UTC Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.035458 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.035485 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.035485 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:02 crc kubenswrapper[4942]: E0218 19:18:02.035679 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:02 crc kubenswrapper[4942]: E0218 19:18:02.035914 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:02 crc kubenswrapper[4942]: E0218 19:18:02.036029 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.075908 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.075999 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.076073 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.076101 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.076151 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:02Z","lastTransitionTime":"2026-02-18T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.179264 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.179580 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.179657 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.179738 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.179837 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:02Z","lastTransitionTime":"2026-02-18T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.282536 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.282592 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.282608 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.282632 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.282648 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:02Z","lastTransitionTime":"2026-02-18T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.385596 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.385703 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.385724 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.385749 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.385798 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:02Z","lastTransitionTime":"2026-02-18T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.489216 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.489281 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.489300 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.489325 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.489344 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:02Z","lastTransitionTime":"2026-02-18T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.592693 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.593015 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.593035 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.593061 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.593080 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:02Z","lastTransitionTime":"2026-02-18T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.696473 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.696523 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.696533 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.696560 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.696574 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:02Z","lastTransitionTime":"2026-02-18T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.799950 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.800009 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.800030 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.800059 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.800080 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:02Z","lastTransitionTime":"2026-02-18T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.905068 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.905122 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.905140 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.905166 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.905183 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:02Z","lastTransitionTime":"2026-02-18T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:02 crc kubenswrapper[4942]: I0218 19:18:02.982012 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 08:13:31.966740158 +0000 UTC Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.010452 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.010516 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.010534 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.010560 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.010580 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:03Z","lastTransitionTime":"2026-02-18T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.035072 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:03 crc kubenswrapper[4942]: E0218 19:18:03.035242 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.113820 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.113876 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.113896 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.113921 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.113940 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:03Z","lastTransitionTime":"2026-02-18T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.217495 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.217597 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.217615 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.217641 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.217659 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:03Z","lastTransitionTime":"2026-02-18T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.321548 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.321652 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.321679 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.321717 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.321749 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:03Z","lastTransitionTime":"2026-02-18T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.424861 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.424921 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.424939 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.424965 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.424984 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:03Z","lastTransitionTime":"2026-02-18T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.528266 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.528356 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.528380 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.528417 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.528440 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:03Z","lastTransitionTime":"2026-02-18T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.630668 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.630711 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.630721 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.630737 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.630747 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:03Z","lastTransitionTime":"2026-02-18T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.733678 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.733725 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.733734 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.733751 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.733777 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:03Z","lastTransitionTime":"2026-02-18T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.837241 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.837316 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.837334 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.837365 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.837385 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:03Z","lastTransitionTime":"2026-02-18T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.940665 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.940728 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.940741 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.940778 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.940793 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:03Z","lastTransitionTime":"2026-02-18T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.982633 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 23:17:46.000892871 +0000 UTC Feb 18 19:18:03 crc kubenswrapper[4942]: I0218 19:18:03.995670 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:03 crc kubenswrapper[4942]: E0218 19:18:03.995915 4942 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:18:03 crc kubenswrapper[4942]: E0218 19:18:03.996009 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs podName:ac5b5f40-34db-4aeb-abb4-57204673bd53 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:11.995986193 +0000 UTC m=+51.700918888 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs") pod "network-metrics-daemon-qwg6q" (UID: "ac5b5f40-34db-4aeb-abb4-57204673bd53") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.035876 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.035940 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.035894 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:04 crc kubenswrapper[4942]: E0218 19:18:04.036099 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:04 crc kubenswrapper[4942]: E0218 19:18:04.036297 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:04 crc kubenswrapper[4942]: E0218 19:18:04.036562 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.044996 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.045067 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.045089 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.045117 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.045137 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:04Z","lastTransitionTime":"2026-02-18T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.149529 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.149678 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.149704 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.149740 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.149799 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:04Z","lastTransitionTime":"2026-02-18T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.252919 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.252989 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.253006 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.253032 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.253052 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:04Z","lastTransitionTime":"2026-02-18T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.356543 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.356616 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.356630 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.356653 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.356671 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:04Z","lastTransitionTime":"2026-02-18T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.461181 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.461280 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.461303 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.461337 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.461362 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:04Z","lastTransitionTime":"2026-02-18T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.564846 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.565017 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.565038 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.565069 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.565094 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:04Z","lastTransitionTime":"2026-02-18T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.668329 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.668418 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.668447 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.668482 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.668512 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:04Z","lastTransitionTime":"2026-02-18T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.771703 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.771800 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.771821 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.771846 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.771864 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:04Z","lastTransitionTime":"2026-02-18T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.876271 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.876361 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.876383 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.876412 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.876431 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:04Z","lastTransitionTime":"2026-02-18T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.980239 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.980300 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.980319 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.980346 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.980370 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:04Z","lastTransitionTime":"2026-02-18T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:04 crc kubenswrapper[4942]: I0218 19:18:04.983230 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 11:07:16.234636081 +0000 UTC Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.035281 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:05 crc kubenswrapper[4942]: E0218 19:18:05.035504 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.083610 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.083683 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.083693 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.083714 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.083728 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:05Z","lastTransitionTime":"2026-02-18T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.186511 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.186596 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.186618 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.186648 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.186668 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:05Z","lastTransitionTime":"2026-02-18T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.291180 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.291248 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.291271 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.291301 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.291324 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:05Z","lastTransitionTime":"2026-02-18T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.395406 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.395512 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.395535 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.395564 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.395583 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:05Z","lastTransitionTime":"2026-02-18T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.499524 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.499663 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.499725 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.499804 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.499834 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:05Z","lastTransitionTime":"2026-02-18T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.602572 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.602648 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.602685 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.602704 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.602718 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:05Z","lastTransitionTime":"2026-02-18T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.706220 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.706298 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.706315 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.706343 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.706361 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:05Z","lastTransitionTime":"2026-02-18T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.810324 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.810408 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.810426 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.810456 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.810475 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:05Z","lastTransitionTime":"2026-02-18T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.913160 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.913230 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.913241 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.913256 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.913266 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:05Z","lastTransitionTime":"2026-02-18T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:05 crc kubenswrapper[4942]: I0218 19:18:05.983885 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 05:37:29.484712551 +0000 UTC Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.016917 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.016969 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.016980 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.017002 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.017015 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:06Z","lastTransitionTime":"2026-02-18T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.035328 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:06 crc kubenswrapper[4942]: E0218 19:18:06.035458 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.035557 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.035568 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:06 crc kubenswrapper[4942]: E0218 19:18:06.035861 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:06 crc kubenswrapper[4942]: E0218 19:18:06.035998 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.120092 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.120187 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.120279 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.120318 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.120342 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:06Z","lastTransitionTime":"2026-02-18T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.223851 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.223938 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.223960 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.223985 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.224005 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:06Z","lastTransitionTime":"2026-02-18T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.327860 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.327912 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.327923 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.327939 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.327950 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:06Z","lastTransitionTime":"2026-02-18T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.431298 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.431339 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.431348 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.431365 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.431377 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:06Z","lastTransitionTime":"2026-02-18T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.539349 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.539402 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.539416 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.539616 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.539664 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:06Z","lastTransitionTime":"2026-02-18T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.643980 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.644060 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.644079 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.644102 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.644121 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:06Z","lastTransitionTime":"2026-02-18T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.747648 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.747716 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.747733 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.747756 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.747812 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:06Z","lastTransitionTime":"2026-02-18T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.850654 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.850721 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.850794 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.850834 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.850858 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:06Z","lastTransitionTime":"2026-02-18T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.953900 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.953985 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.954004 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.954034 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.954052 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:06Z","lastTransitionTime":"2026-02-18T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:06 crc kubenswrapper[4942]: I0218 19:18:06.984828 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 21:37:30.341159677 +0000 UTC Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.035845 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:07 crc kubenswrapper[4942]: E0218 19:18:07.036044 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.037121 4942 scope.go:117] "RemoveContainer" containerID="093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.057222 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.057294 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.057312 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.057338 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.057357 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:07Z","lastTransitionTime":"2026-02-18T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.306838 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.307322 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.307348 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.307383 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.307407 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:07Z","lastTransitionTime":"2026-02-18T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:07 crc kubenswrapper[4942]: E0218 19:18:07.327825 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.334093 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.334199 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.334227 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.334263 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.334289 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:07Z","lastTransitionTime":"2026-02-18T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:07 crc kubenswrapper[4942]: E0218 19:18:07.356170 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.362422 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.362491 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.362510 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.362539 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.362558 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:07Z","lastTransitionTime":"2026-02-18T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:07 crc kubenswrapper[4942]: E0218 19:18:07.385952 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.391701 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.391737 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.391745 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.391780 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.391790 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:07Z","lastTransitionTime":"2026-02-18T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:07 crc kubenswrapper[4942]: E0218 19:18:07.407713 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.412807 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.413230 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.413245 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.413262 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.413293 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:07Z","lastTransitionTime":"2026-02-18T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:07 crc kubenswrapper[4942]: E0218 19:18:07.426721 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: E0218 19:18:07.426982 4942 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.431225 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.431290 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.431311 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.431351 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.431372 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:07Z","lastTransitionTime":"2026-02-18T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.442393 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/1.log" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.446652 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerStarted","Data":"5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5"} Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.447453 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.466863 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf
3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.518807 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.537252 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.537289 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.537297 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.537312 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.537324 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:07Z","lastTransitionTime":"2026-02-18T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.546638 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:17:53Z\\\",\\\"message\\\":\\\"4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.331601 6390 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 
19:17:53.331580 6390 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.333326 6390 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 19:17:53.333620 6390 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 19:17:53.333683 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:17:53.333723 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 19:17:53.333863 6390 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.561055 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.574131 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.590481 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.605454 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.621946 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.633063 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.639713 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.639745 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.639753 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.639781 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.639791 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:07Z","lastTransitionTime":"2026-02-18T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.643592 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5
084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.656709 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.673622 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a72
1ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.695328 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc 
kubenswrapper[4942]: I0218 19:18:07.711910 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.728739 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.742139 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.742203 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.742215 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.742236 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.742250 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:07Z","lastTransitionTime":"2026-02-18T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.744292 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:07Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.844556 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.844602 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.844612 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.844629 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.844641 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:07Z","lastTransitionTime":"2026-02-18T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.947300 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.947346 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.947357 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.947376 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.947387 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:07Z","lastTransitionTime":"2026-02-18T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:07 crc kubenswrapper[4942]: I0218 19:18:07.985908 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 10:11:39.798510575 +0000 UTC Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.035682 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.035789 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.035701 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:08 crc kubenswrapper[4942]: E0218 19:18:08.035967 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:08 crc kubenswrapper[4942]: E0218 19:18:08.036220 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:08 crc kubenswrapper[4942]: E0218 19:18:08.036449 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.050281 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.050345 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.050364 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.050392 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.050411 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:08Z","lastTransitionTime":"2026-02-18T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.153432 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.153492 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.153505 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.153526 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.153539 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:08Z","lastTransitionTime":"2026-02-18T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.256009 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.256057 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.256068 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.256089 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.256104 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:08Z","lastTransitionTime":"2026-02-18T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.359357 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.359430 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.359448 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.359477 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.359498 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:08Z","lastTransitionTime":"2026-02-18T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.460076 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/2.log" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.461244 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/1.log" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.462509 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.462574 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.462593 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.462623 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.462646 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:08Z","lastTransitionTime":"2026-02-18T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.466396 4942 generic.go:334] "Generic (PLEG): container finished" podID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerID="5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5" exitCode=1 Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.466441 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5"} Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.466481 4942 scope.go:117] "RemoveContainer" containerID="093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.467588 4942 scope.go:117] "RemoveContainer" containerID="5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5" Feb 18 19:18:08 crc kubenswrapper[4942]: E0218 19:18:08.467961 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.487243 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.502485 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.527107 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.546034 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.565413 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.565457 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.565468 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:08 crc 
kubenswrapper[4942]: I0218 19:18:08.565488 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.565502 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:08Z","lastTransitionTime":"2026-02-18T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.570279 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.605289 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093e5e3bd5a3d7277ee21a03cf707e96c859c4d827efe302bd1a67ee3491c717\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:17:53Z\\\",\\\"message\\\":\\\"4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.331601 6390 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0218 
19:17:53.331580 6390 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager/kube-controller-manager]} name:Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.36:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ba175bbe-5cc4-47e6-a32d-57693e1320bd}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:17:53.333326 6390 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0218 19:17:53.333620 6390 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0218 19:17:53.333683 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:17:53.333723 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0218 19:17:53.333863 6390 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:08Z\\\",\\\"message\\\":\\\"I0218 19:18:08.178608 6626 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:18:08.178631 6626 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:18:08.178626 6626 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:08.178651 6626 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed 
*v1.NetworkPolicy event handler 4\\\\nI0218 19:18:08.178699 6626 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:08.178745 6626 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:18:08.178787 6626 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:08.178795 6626 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:18:08.178814 6626 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 19:18:08.178831 6626 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:18:08.178834 6626 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:08.178850 6626 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:08.178859 6626 factory.go:656] Stopping watch factory\\\\nI0218 19:18:08.178876 6626 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:18:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\
\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ove
rrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 
19:18:08.629933 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.650650 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.668056 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.668124 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.668145 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.668171 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.668191 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:08Z","lastTransitionTime":"2026-02-18T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.670886 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.696405 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.715987 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.733074 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.750538 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.770653 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.771158 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.771211 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.771223 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.771244 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.771257 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:08Z","lastTransitionTime":"2026-02-18T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.787184 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a721ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.801517 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:08Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:08 crc 
kubenswrapper[4942]: I0218 19:18:08.874711 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.874788 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.874801 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.874822 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.874836 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:08Z","lastTransitionTime":"2026-02-18T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.978685 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.978755 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.978788 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.978811 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.978826 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:08Z","lastTransitionTime":"2026-02-18T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:08 crc kubenswrapper[4942]: I0218 19:18:08.986374 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 07:21:25.35222522 +0000 UTC Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.035200 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:09 crc kubenswrapper[4942]: E0218 19:18:09.035412 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.083158 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.083246 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.083272 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.083305 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.083332 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:09Z","lastTransitionTime":"2026-02-18T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.186021 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.186082 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.186099 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.186123 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.186139 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:09Z","lastTransitionTime":"2026-02-18T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.289650 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.289726 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.289746 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.289802 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.289827 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:09Z","lastTransitionTime":"2026-02-18T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.393311 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.393383 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.393401 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.393431 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.393450 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:09Z","lastTransitionTime":"2026-02-18T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.473296 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/2.log" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.478557 4942 scope.go:117] "RemoveContainer" containerID="5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5" Feb 18 19:18:09 crc kubenswrapper[4942]: E0218 19:18:09.478851 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.496440 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.496503 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.496522 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.496550 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.496574 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:09Z","lastTransitionTime":"2026-02-18T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.500282 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.516441 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.537895 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.558600 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.577063 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.596012 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.600150 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:09 crc 
kubenswrapper[4942]: I0218 19:18:09.600227 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.600246 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.600275 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.600292 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:09Z","lastTransitionTime":"2026-02-18T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.634879 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:08Z\\\",\\\"message\\\":\\\"I0218 19:18:08.178608 6626 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0218 19:18:08.178631 6626 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:18:08.178626 6626 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:08.178651 6626 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:18:08.178699 6626 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:08.178745 6626 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:18:08.178787 6626 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:08.178795 6626 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:18:08.178814 6626 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 19:18:08.178831 6626 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:18:08.178834 6626 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:08.178850 6626 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:08.178859 6626 factory.go:656] Stopping watch factory\\\\nI0218 19:18:08.178876 6626 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:18:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.656350 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.676133 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.700085 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.707000 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.707059 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.707078 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.707106 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.707125 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:09Z","lastTransitionTime":"2026-02-18T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.723096 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.741040 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.761995 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.783731 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.803509 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a72
1ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.810296 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.810373 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.810392 4942 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.810419 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.810437 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:09Z","lastTransitionTime":"2026-02-18T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.821818 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:09Z is after 2025-08-24T17:21:41Z" Feb 
18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.914338 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.914397 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.914417 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.914443 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.914461 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:09Z","lastTransitionTime":"2026-02-18T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:09 crc kubenswrapper[4942]: I0218 19:18:09.987498 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 16:35:45.531100897 +0000 UTC Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.017948 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.018018 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.018043 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.018071 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.018089 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:10Z","lastTransitionTime":"2026-02-18T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.035528 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.035632 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.035623 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:10 crc kubenswrapper[4942]: E0218 19:18:10.036023 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:10 crc kubenswrapper[4942]: E0218 19:18:10.036573 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:10 crc kubenswrapper[4942]: E0218 19:18:10.037183 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.121840 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.121928 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.121955 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.121990 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.122014 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:10Z","lastTransitionTime":"2026-02-18T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.225215 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.225268 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.225285 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.225310 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.225328 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:10Z","lastTransitionTime":"2026-02-18T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.329239 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.329320 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.329345 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.329378 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.329401 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:10Z","lastTransitionTime":"2026-02-18T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.432302 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.432396 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.432419 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.432448 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.432467 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:10Z","lastTransitionTime":"2026-02-18T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.535121 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.535190 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.535199 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.535218 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.535247 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:10Z","lastTransitionTime":"2026-02-18T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.638890 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.639363 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.639398 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.639421 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.639435 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:10Z","lastTransitionTime":"2026-02-18T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.743407 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.743481 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.743498 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.743525 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.743546 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:10Z","lastTransitionTime":"2026-02-18T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.847701 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.847792 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.847805 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.847824 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.847837 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:10Z","lastTransitionTime":"2026-02-18T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.951680 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.951730 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.951743 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.951793 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.951810 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:10Z","lastTransitionTime":"2026-02-18T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:10 crc kubenswrapper[4942]: I0218 19:18:10.988444 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 23:37:04.329019429 +0000 UTC Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.035328 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:11 crc kubenswrapper[4942]: E0218 19:18:11.035581 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.055237 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.055315 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.055333 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.055408 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.055492 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:11Z","lastTransitionTime":"2026-02-18T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.063117 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5
084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.085010 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.106618 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a72
1ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.124813 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc 
kubenswrapper[4942]: I0218 19:18:11.148023 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.158366 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.158428 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.158447 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.158473 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.158492 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:11Z","lastTransitionTime":"2026-02-18T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.166814 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.190971 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3
79b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203
bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-
cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.228399 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:08Z\\\",\\\"message\\\":\\\"I0218 19:18:08.178608 6626 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:18:08.178631 6626 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:18:08.178626 6626 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:08.178651 6626 handler.go:190] Sending *v1.Namespace event 
handler 5 for removal\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:18:08.178699 6626 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:08.178745 6626 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:18:08.178787 6626 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:08.178795 6626 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:18:08.178814 6626 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 19:18:08.178831 6626 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:18:08.178834 6626 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:08.178850 6626 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:08.178859 6626 factory.go:656] Stopping watch factory\\\\nI0218 19:18:08.178876 6626 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:18:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.247592 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.264634 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.264722 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.264745 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.264806 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.264828 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:11Z","lastTransitionTime":"2026-02-18T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.268513 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.286407 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.303906 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.319677 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.336529 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.352927 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.367438 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:11Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.368926 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.369064 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.369166 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.369264 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.369370 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:11Z","lastTransitionTime":"2026-02-18T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.473294 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.473577 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.473676 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.473752 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.473854 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:11Z","lastTransitionTime":"2026-02-18T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.578063 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.578145 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.578170 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.578205 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.578230 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:11Z","lastTransitionTime":"2026-02-18T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.682150 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.682241 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.682258 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.682291 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.682317 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:11Z","lastTransitionTime":"2026-02-18T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.785521 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.785600 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.785623 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.785657 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.785680 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:11Z","lastTransitionTime":"2026-02-18T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.888885 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.888949 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.888968 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.888993 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.889014 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:11Z","lastTransitionTime":"2026-02-18T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.989005 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 08:11:58.571778786 +0000 UTC Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.992606 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.992725 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.992746 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.992861 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:11 crc kubenswrapper[4942]: I0218 19:18:11.992879 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:11Z","lastTransitionTime":"2026-02-18T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.035235 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.035344 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.035275 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:12 crc kubenswrapper[4942]: E0218 19:18:12.035500 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:12 crc kubenswrapper[4942]: E0218 19:18:12.035743 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:12 crc kubenswrapper[4942]: E0218 19:18:12.035987 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.065863 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:12 crc kubenswrapper[4942]: E0218 19:18:12.065997 4942 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:18:12 crc kubenswrapper[4942]: E0218 19:18:12.066110 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs podName:ac5b5f40-34db-4aeb-abb4-57204673bd53 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:28.06607437 +0000 UTC m=+67.771007035 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs") pod "network-metrics-daemon-qwg6q" (UID: "ac5b5f40-34db-4aeb-abb4-57204673bd53") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.096201 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.096279 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.096302 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.096334 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.096359 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:12Z","lastTransitionTime":"2026-02-18T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.199758 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.199865 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.199888 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.199916 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.199938 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:12Z","lastTransitionTime":"2026-02-18T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.304018 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.304103 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.304128 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.304166 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.304194 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:12Z","lastTransitionTime":"2026-02-18T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.407772 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.407819 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.407830 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.407853 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.407864 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:12Z","lastTransitionTime":"2026-02-18T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.511106 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.511162 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.511171 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.511192 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.511203 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:12Z","lastTransitionTime":"2026-02-18T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.614677 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.614809 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.614834 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.614867 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.614898 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:12Z","lastTransitionTime":"2026-02-18T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.718345 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.718418 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.718436 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.718463 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.718483 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:12Z","lastTransitionTime":"2026-02-18T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.821418 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.821500 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.821524 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.821555 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.821577 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:12Z","lastTransitionTime":"2026-02-18T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.931168 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.931280 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.931305 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.931340 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.931376 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:12Z","lastTransitionTime":"2026-02-18T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:12 crc kubenswrapper[4942]: I0218 19:18:12.990070 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 12:21:00.52788685 +0000 UTC Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.034532 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.034597 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.034615 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.034642 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.034661 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:13Z","lastTransitionTime":"2026-02-18T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.035206 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.035493 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.137928 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.137995 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.138013 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.138041 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.138058 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:13Z","lastTransitionTime":"2026-02-18T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.242405 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.242462 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.242478 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.242499 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.242516 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:13Z","lastTransitionTime":"2026-02-18T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.345866 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.345945 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.345963 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.345989 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.346009 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:13Z","lastTransitionTime":"2026-02-18T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.449546 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.449625 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.449647 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.449677 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.449701 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:13Z","lastTransitionTime":"2026-02-18T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.552487 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.552558 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.552583 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.552615 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.552637 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:13Z","lastTransitionTime":"2026-02-18T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.655637 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.655701 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.655719 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.655742 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.655755 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:13Z","lastTransitionTime":"2026-02-18T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.758223 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.758283 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.758297 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.758317 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.758331 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:13Z","lastTransitionTime":"2026-02-18T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.785745 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.785910 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.786132 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.786200 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.786203 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.786264 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.786281 4942 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr 
for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.786224 4942 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.786357 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:45.786332437 +0000 UTC m=+85.491265112 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.786396 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:45.786367498 +0000 UTC m=+85.491300193 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.861643 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.861691 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.861700 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.861716 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.861726 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:13Z","lastTransitionTime":"2026-02-18T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.965301 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.965376 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.965394 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.965422 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.965439 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:13Z","lastTransitionTime":"2026-02-18T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.988116 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.988296 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 19:18:45.988257154 +0000 UTC m=+85.693189859 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.988448 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.988516 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.988674 4942 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.988681 4942 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.988756 4942 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:45.988737436 +0000 UTC m=+85.693670141 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:18:13 crc kubenswrapper[4942]: E0218 19:18:13.988831 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:18:45.988817058 +0000 UTC m=+85.693749753 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:18:13 crc kubenswrapper[4942]: I0218 19:18:13.991053 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 15:06:01.556037871 +0000 UTC Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.035408 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.035484 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:14 crc kubenswrapper[4942]: E0218 19:18:14.035611 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.035659 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:14 crc kubenswrapper[4942]: E0218 19:18:14.035848 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:14 crc kubenswrapper[4942]: E0218 19:18:14.036125 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.068799 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.068851 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.068868 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.068892 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.068909 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:14Z","lastTransitionTime":"2026-02-18T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.172096 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.172149 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.172160 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.172182 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.172195 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:14Z","lastTransitionTime":"2026-02-18T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.275526 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.275971 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.275995 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.276027 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.276049 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:14Z","lastTransitionTime":"2026-02-18T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.380972 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.381054 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.381075 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.381104 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.381126 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:14Z","lastTransitionTime":"2026-02-18T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.485217 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.485948 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.486252 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.486743 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.487013 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:14Z","lastTransitionTime":"2026-02-18T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.590339 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.590752 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.590904 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.591016 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.591106 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:14Z","lastTransitionTime":"2026-02-18T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.694856 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.694934 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.694954 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.694983 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.695002 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:14Z","lastTransitionTime":"2026-02-18T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.799236 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.799325 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.799346 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.799375 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.799396 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:14Z","lastTransitionTime":"2026-02-18T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.903370 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.903458 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.903528 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.903562 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.903585 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:14Z","lastTransitionTime":"2026-02-18T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:14 crc kubenswrapper[4942]: I0218 19:18:14.991376 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 08:10:28.754119891 +0000 UTC Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.006908 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.006980 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.007006 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.007038 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.007060 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:15Z","lastTransitionTime":"2026-02-18T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.035664 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:15 crc kubenswrapper[4942]: E0218 19:18:15.036011 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.109851 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.109931 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.109955 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.109980 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.110000 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:15Z","lastTransitionTime":"2026-02-18T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.222436 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.222508 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.222528 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.222553 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.222570 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:15Z","lastTransitionTime":"2026-02-18T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.325503 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.325561 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.325579 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.325605 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.325623 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:15Z","lastTransitionTime":"2026-02-18T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.428897 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.428995 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.429013 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.429038 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.429054 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:15Z","lastTransitionTime":"2026-02-18T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.532751 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.533195 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.533283 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.533375 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.533450 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:15Z","lastTransitionTime":"2026-02-18T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.638244 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.638300 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.638310 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.638328 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.638341 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:15Z","lastTransitionTime":"2026-02-18T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.741420 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.741672 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.741702 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.741737 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.741794 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:15Z","lastTransitionTime":"2026-02-18T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.845386 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.845441 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.845459 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.845484 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.845501 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:15Z","lastTransitionTime":"2026-02-18T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.859429 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.880688 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:15Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.897885 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:15Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.920881 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:15Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.941228 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:15Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.948472 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:15 crc 
kubenswrapper[4942]: I0218 19:18:15.948868 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.949070 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.949221 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.949343 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:15Z","lastTransitionTime":"2026-02-18T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.971882 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:08Z\\\",\\\"message\\\":\\\"I0218 19:18:08.178608 6626 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0218 19:18:08.178631 6626 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:18:08.178626 6626 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:08.178651 6626 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:18:08.178699 6626 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:08.178745 6626 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:18:08.178787 6626 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:08.178795 6626 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:18:08.178814 6626 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 19:18:08.178831 6626 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:18:08.178834 6626 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:08.178850 6626 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:08.178859 6626 factory.go:656] Stopping watch factory\\\\nI0218 19:18:08.178876 6626 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:18:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:15Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.992508 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 01:10:11.174280786 +0000 UTC Feb 18 19:18:15 crc kubenswrapper[4942]: I0218 19:18:15.992623 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:15Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.009748 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.033697 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.034860 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.034976 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:16 crc kubenswrapper[4942]: E0218 19:18:16.035045 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:16 crc kubenswrapper[4942]: E0218 19:18:16.035260 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.035299 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:16 crc kubenswrapper[4942]: E0218 19:18:16.035698 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.053194 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.053258 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.053275 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.053300 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.053319 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:16Z","lastTransitionTime":"2026-02-18T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.053727 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.076069 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811c
a11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.095204 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.112723 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.133523 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.154044 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.156297 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.156340 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.156358 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.156383 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.156402 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:16Z","lastTransitionTime":"2026-02-18T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.172237 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a721ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.187909 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:16Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:16 crc 
kubenswrapper[4942]: I0218 19:18:16.260246 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.260310 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.260331 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.260359 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.260378 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:16Z","lastTransitionTime":"2026-02-18T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.363123 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.363203 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.363254 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.363291 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.363313 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:16Z","lastTransitionTime":"2026-02-18T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.466281 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.466324 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.466334 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.466352 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.466364 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:16Z","lastTransitionTime":"2026-02-18T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.569446 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.569526 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.569549 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.569581 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.569599 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:16Z","lastTransitionTime":"2026-02-18T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.673328 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.673433 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.673456 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.673484 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.673508 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:16Z","lastTransitionTime":"2026-02-18T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.776988 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.777058 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.777076 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.777104 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.777124 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:16Z","lastTransitionTime":"2026-02-18T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.880610 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.880677 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.880695 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.880715 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.880818 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:16Z","lastTransitionTime":"2026-02-18T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.984077 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.984159 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.984176 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.984205 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.984227 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:16Z","lastTransitionTime":"2026-02-18T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:16 crc kubenswrapper[4942]: I0218 19:18:16.993446 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 12:46:55.567108899 +0000 UTC Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.035691 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:17 crc kubenswrapper[4942]: E0218 19:18:17.035895 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.087749 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.087839 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.087861 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.087890 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.087914 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:17Z","lastTransitionTime":"2026-02-18T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.191164 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.191230 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.191248 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.191272 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.191290 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:17Z","lastTransitionTime":"2026-02-18T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.295106 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.295185 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.295209 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.295240 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.295260 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:17Z","lastTransitionTime":"2026-02-18T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.399027 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.399096 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.399114 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.399138 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.399155 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:17Z","lastTransitionTime":"2026-02-18T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.502989 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.503063 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.503084 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.503114 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.503164 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:17Z","lastTransitionTime":"2026-02-18T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.586315 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.586385 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.586402 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.586430 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.586448 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:17Z","lastTransitionTime":"2026-02-18T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:17 crc kubenswrapper[4942]: E0218 19:18:17.607879 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.613621 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.613718 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.613740 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.613803 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.613833 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:17Z","lastTransitionTime":"2026-02-18T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:17 crc kubenswrapper[4942]: E0218 19:18:17.635390 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.640553 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.640603 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.640620 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.640645 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.640662 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:17Z","lastTransitionTime":"2026-02-18T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:17 crc kubenswrapper[4942]: E0218 19:18:17.662980 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.665404 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.669131 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.669172 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.669187 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.669214 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.669233 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:17Z","lastTransitionTime":"2026-02-18T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.682254 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.685193 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: E0218 19:18:17.697589 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.701434 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.702971 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.703004 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.703017 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.703035 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.703051 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:17Z","lastTransitionTime":"2026-02-18T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:17 crc kubenswrapper[4942]: E0218 19:18:17.723430 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: E0218 19:18:17.723806 4942 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.723941 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811c
a11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.726649 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.726694 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.726711 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.726737 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.726754 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:17Z","lastTransitionTime":"2026-02-18T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.744237 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.764302 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a72
1ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.778173 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc 
kubenswrapper[4942]: I0218 19:18:17.798227 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.813823 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.826741 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.829261 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.829339 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.829361 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.829389 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.829411 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:17Z","lastTransitionTime":"2026-02-18T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.850478 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.869469 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.886918 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.906087 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.923530 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.932022 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.932093 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.932119 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:17 crc 
kubenswrapper[4942]: I0218 19:18:17.932153 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.932177 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:17Z","lastTransitionTime":"2026-02-18T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.944673 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.983599 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:08Z\\\",\\\"message\\\":\\\"I0218 19:18:08.178608 6626 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:18:08.178631 6626 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:18:08.178626 6626 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:08.178651 6626 handler.go:190] Sending *v1.Namespace event 
handler 5 for removal\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:18:08.178699 6626 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:08.178745 6626 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:18:08.178787 6626 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:08.178795 6626 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:18:08.178814 6626 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 19:18:08.178831 6626 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:18:08.178834 6626 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:08.178850 6626 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:08.178859 6626 factory.go:656] Stopping watch factory\\\\nI0218 19:18:08.178876 6626 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:18:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:17Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:17 crc kubenswrapper[4942]: I0218 19:18:17.994533 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 05:09:12.68325088 +0000 UTC Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.034748 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.034812 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.034824 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.034844 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:18 crc kubenswrapper[4942]: 
I0218 19:18:18.034858 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:18Z","lastTransitionTime":"2026-02-18T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.035406 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.035521 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:18 crc kubenswrapper[4942]: E0218 19:18:18.035624 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.035831 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:18 crc kubenswrapper[4942]: E0218 19:18:18.035984 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:18 crc kubenswrapper[4942]: E0218 19:18:18.036337 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.138341 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.138414 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.138429 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.138452 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.138466 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:18Z","lastTransitionTime":"2026-02-18T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.240755 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.240846 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.240864 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.240889 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.240908 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:18Z","lastTransitionTime":"2026-02-18T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.344099 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.344153 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.344172 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.344199 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.344218 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:18Z","lastTransitionTime":"2026-02-18T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.447316 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.447379 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.447398 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.447424 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.447445 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:18Z","lastTransitionTime":"2026-02-18T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.549902 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.549965 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.549986 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.550017 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.550039 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:18Z","lastTransitionTime":"2026-02-18T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.653392 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.653467 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.653488 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.653518 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.653542 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:18Z","lastTransitionTime":"2026-02-18T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.756567 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.756640 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.756661 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.756688 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.756709 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:18Z","lastTransitionTime":"2026-02-18T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.859953 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.860035 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.860072 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.860099 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.860124 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:18Z","lastTransitionTime":"2026-02-18T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.963827 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.963901 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.963926 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.963953 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.963974 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:18Z","lastTransitionTime":"2026-02-18T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:18 crc kubenswrapper[4942]: I0218 19:18:18.995571 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 22:38:31.910584604 +0000 UTC Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.035390 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:19 crc kubenswrapper[4942]: E0218 19:18:19.035641 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.067275 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.067337 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.067358 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.067384 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.067403 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:19Z","lastTransitionTime":"2026-02-18T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.170987 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.171074 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.171094 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.171119 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.171139 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:19Z","lastTransitionTime":"2026-02-18T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.274099 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.274162 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.274179 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.274209 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.274228 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:19Z","lastTransitionTime":"2026-02-18T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.378210 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.378286 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.378304 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.378336 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.378354 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:19Z","lastTransitionTime":"2026-02-18T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.482060 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.482133 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.482151 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.482180 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.482200 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:19Z","lastTransitionTime":"2026-02-18T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.586137 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.586199 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.586216 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.586241 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.586258 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:19Z","lastTransitionTime":"2026-02-18T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.689453 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.689532 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.689548 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.689570 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.689583 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:19Z","lastTransitionTime":"2026-02-18T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.793182 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.793268 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.793290 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.793324 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.793351 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:19Z","lastTransitionTime":"2026-02-18T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.897037 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.897135 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.897160 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.897197 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.897225 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:19Z","lastTransitionTime":"2026-02-18T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:19 crc kubenswrapper[4942]: I0218 19:18:19.996219 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 23:14:28.480010051 +0000 UTC
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.000622 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.000670 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.000678 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.000696 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.000711 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:20Z","lastTransitionTime":"2026-02-18T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.035287 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.035336 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.035464 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q"
Feb 18 19:18:20 crc kubenswrapper[4942]: E0218 19:18:20.035465 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 19:18:20 crc kubenswrapper[4942]: E0218 19:18:20.035677 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 19:18:20 crc kubenswrapper[4942]: E0218 19:18:20.035750 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.036424 4942 scope.go:117] "RemoveContainer" containerID="5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5"
Feb 18 19:18:20 crc kubenswrapper[4942]: E0218 19:18:20.036592 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.104092 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.104134 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.104145 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.104163 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.104178 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:20Z","lastTransitionTime":"2026-02-18T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.208105 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.208175 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.208189 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.208211 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.208227 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:20Z","lastTransitionTime":"2026-02-18T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.311707 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.311824 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.311839 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.311861 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.311874 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:20Z","lastTransitionTime":"2026-02-18T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.415846 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.415919 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.415937 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.415964 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.415981 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:20Z","lastTransitionTime":"2026-02-18T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.524257 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.524659 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.525339 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.525374 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.525391 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:20Z","lastTransitionTime":"2026-02-18T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.628775 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.628836 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.628849 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.628871 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.628885 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:20Z","lastTransitionTime":"2026-02-18T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.731985 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.732047 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.732059 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.732079 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.732094 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:20Z","lastTransitionTime":"2026-02-18T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.835923 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.835987 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.836013 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.836046 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.836070 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:20Z","lastTransitionTime":"2026-02-18T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.939008 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.939102 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.939126 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.939158 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.939180 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:20Z","lastTransitionTime":"2026-02-18T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 19:18:20 crc kubenswrapper[4942]: I0218 19:18:20.997348 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 08:32:28.419151421 +0000 UTC
Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.035151 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 19:18:21 crc kubenswrapper[4942]: E0218 19:18:21.035347 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.042376 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.042420 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.042437 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.042462 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.042478 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:21Z","lastTransitionTime":"2026-02-18T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.058789 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5
084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.079670 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.101868 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a72
1ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.118921 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc 
kubenswrapper[4942]: I0218 19:18:21.137852 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.145532 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.145590 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.145609 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.145635 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.145653 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:21Z","lastTransitionTime":"2026-02-18T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.155293 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.182272 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3
79b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203
bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-
cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.199866 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.220317 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.249486 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:21 crc 
kubenswrapper[4942]: I0218 19:18:21.250085 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.250395 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.250823 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.251501 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:21Z","lastTransitionTime":"2026-02-18T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.255036 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:08Z\\\",\\\"message\\\":\\\"I0218 19:18:08.178608 6626 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0218 19:18:08.178631 6626 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:18:08.178626 6626 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:08.178651 6626 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:18:08.178699 6626 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:08.178745 6626 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:18:08.178787 6626 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:08.178795 6626 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:18:08.178814 6626 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 19:18:08.178831 6626 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:18:08.178834 6626 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:08.178850 6626 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:08.178859 6626 factory.go:656] Stopping watch factory\\\\nI0218 19:18:08.178876 6626 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:18:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.273563 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"276d1ade-b018-4a59-8184-e121ff600ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf61d811b92484ed6f2e49184a29d51957000ce926d74afe7b452b8845673afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://691cb927291454a41fe8552c32737d52f8430e180870cd9c2bdc827926f15cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a3ed5634c2ead9b37bd3c51e5ba9f710e1a2b4430552bfce39b234bc7efdac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.292488 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.308553 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.324601 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.340111 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811c
a11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.354747 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.354826 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.354843 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.354867 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.354886 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:21Z","lastTransitionTime":"2026-02-18T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.356006 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.368667 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:21Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.458434 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.458477 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.458489 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.458511 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.458524 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:21Z","lastTransitionTime":"2026-02-18T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.561600 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.561666 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.561685 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.561719 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.561740 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:21Z","lastTransitionTime":"2026-02-18T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.665675 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.665732 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.665751 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.665816 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.665842 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:21Z","lastTransitionTime":"2026-02-18T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.769325 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.769414 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.769432 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.769949 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.769984 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:21Z","lastTransitionTime":"2026-02-18T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.872681 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.873051 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.873122 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.873192 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.873259 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:21Z","lastTransitionTime":"2026-02-18T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.976717 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.976821 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.976847 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.976875 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.976896 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:21Z","lastTransitionTime":"2026-02-18T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:21 crc kubenswrapper[4942]: I0218 19:18:21.998443 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 21:35:06.193859524 +0000 UTC Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.035111 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:22 crc kubenswrapper[4942]: E0218 19:18:22.035344 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.035646 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:22 crc kubenswrapper[4942]: E0218 19:18:22.035819 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.036044 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:22 crc kubenswrapper[4942]: E0218 19:18:22.036162 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.079968 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.080044 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.080061 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.080089 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.080110 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:22Z","lastTransitionTime":"2026-02-18T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.183241 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.183290 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.183302 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.183322 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.183336 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:22Z","lastTransitionTime":"2026-02-18T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.286246 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.286337 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.286354 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.286382 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.286401 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:22Z","lastTransitionTime":"2026-02-18T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.389519 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.389568 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.389580 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.389600 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.389610 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:22Z","lastTransitionTime":"2026-02-18T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.492456 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.492524 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.492537 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.492556 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.492571 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:22Z","lastTransitionTime":"2026-02-18T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.595373 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.595427 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.595444 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.595463 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.595477 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:22Z","lastTransitionTime":"2026-02-18T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.698490 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.698547 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.698558 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.698577 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.698588 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:22Z","lastTransitionTime":"2026-02-18T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.803497 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.803581 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.803607 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.803644 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.803669 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:22Z","lastTransitionTime":"2026-02-18T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.908749 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.908825 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.908843 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.908867 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:22 crc kubenswrapper[4942]: I0218 19:18:22.908886 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:22Z","lastTransitionTime":"2026-02-18T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:22.999694 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 20:05:05.627976294 +0000 UTC Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.012901 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.012942 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.012953 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.012971 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.012983 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:23Z","lastTransitionTime":"2026-02-18T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.035483 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:23 crc kubenswrapper[4942]: E0218 19:18:23.035644 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.116647 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.116735 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.116756 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.116814 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.116833 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:23Z","lastTransitionTime":"2026-02-18T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.220508 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.220605 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.220625 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.220655 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.220678 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:23Z","lastTransitionTime":"2026-02-18T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.323698 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.323798 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.323819 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.323845 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.323863 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:23Z","lastTransitionTime":"2026-02-18T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.426866 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.426941 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.426963 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.426992 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.427012 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:23Z","lastTransitionTime":"2026-02-18T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.530218 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.530280 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.530299 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.530325 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.530344 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:23Z","lastTransitionTime":"2026-02-18T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.633329 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.633413 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.633432 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.633460 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.633484 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:23Z","lastTransitionTime":"2026-02-18T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.738259 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.738326 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.738537 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.738561 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.738576 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:23Z","lastTransitionTime":"2026-02-18T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.842575 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.842623 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.842639 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.842661 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.842674 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:23Z","lastTransitionTime":"2026-02-18T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.946725 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.946839 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.946866 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.946897 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:23 crc kubenswrapper[4942]: I0218 19:18:23.946918 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:23Z","lastTransitionTime":"2026-02-18T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.000584 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 23:39:48.95829545 +0000 UTC Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.035435 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.035539 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.035640 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:24 crc kubenswrapper[4942]: E0218 19:18:24.035864 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:24 crc kubenswrapper[4942]: E0218 19:18:24.036067 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:24 crc kubenswrapper[4942]: E0218 19:18:24.036186 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.051095 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.051159 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.051170 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.051192 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.051205 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:24Z","lastTransitionTime":"2026-02-18T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.155055 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.155134 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.155153 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.155180 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.155200 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:24Z","lastTransitionTime":"2026-02-18T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.258913 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.258993 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.259011 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.259037 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.259054 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:24Z","lastTransitionTime":"2026-02-18T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.362933 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.363003 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.363022 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.363049 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.363067 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:24Z","lastTransitionTime":"2026-02-18T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.466877 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.466930 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.466947 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.466971 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.466991 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:24Z","lastTransitionTime":"2026-02-18T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.569887 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.569968 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.569985 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.570013 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.570037 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:24Z","lastTransitionTime":"2026-02-18T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.672992 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.673037 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.673047 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.673064 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.673075 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:24Z","lastTransitionTime":"2026-02-18T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.775551 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.775615 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.775628 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.775653 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.775673 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:24Z","lastTransitionTime":"2026-02-18T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.879288 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.879393 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.879446 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.879480 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.879551 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:24Z","lastTransitionTime":"2026-02-18T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.984031 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.984084 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.984095 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.984115 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:24 crc kubenswrapper[4942]: I0218 19:18:24.984127 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:24Z","lastTransitionTime":"2026-02-18T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.001715 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 23:37:48.663896929 +0000 UTC Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.035781 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:25 crc kubenswrapper[4942]: E0218 19:18:25.035992 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.088715 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.088850 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.088873 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.088907 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.088928 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:25Z","lastTransitionTime":"2026-02-18T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.192353 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.192451 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.192506 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.192536 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.192559 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:25Z","lastTransitionTime":"2026-02-18T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.297475 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.297592 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.297618 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.297650 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.297672 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:25Z","lastTransitionTime":"2026-02-18T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.400619 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.400684 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.400704 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.400725 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.400742 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:25Z","lastTransitionTime":"2026-02-18T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.503841 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.503900 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.503912 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.503930 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.503942 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:25Z","lastTransitionTime":"2026-02-18T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.608713 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.608809 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.608832 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.608860 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.608880 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:25Z","lastTransitionTime":"2026-02-18T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.712056 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.712121 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.712145 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.712177 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.712203 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:25Z","lastTransitionTime":"2026-02-18T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.816039 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.816095 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.816110 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.816130 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.816142 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:25Z","lastTransitionTime":"2026-02-18T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.919152 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.919206 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.919221 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.919239 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:25 crc kubenswrapper[4942]: I0218 19:18:25.919251 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:25Z","lastTransitionTime":"2026-02-18T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.002540 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 05:50:17.213514892 +0000 UTC Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.021252 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.021313 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.021322 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.021337 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.021346 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:26Z","lastTransitionTime":"2026-02-18T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.035397 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.035397 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:26 crc kubenswrapper[4942]: E0218 19:18:26.035530 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.035507 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:26 crc kubenswrapper[4942]: E0218 19:18:26.035592 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:26 crc kubenswrapper[4942]: E0218 19:18:26.035869 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.123810 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.123855 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.123866 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.123884 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.123896 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:26Z","lastTransitionTime":"2026-02-18T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.227541 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.227590 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.227600 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.227620 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.227631 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:26Z","lastTransitionTime":"2026-02-18T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.330037 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.330088 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.330100 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.330119 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.330131 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:26Z","lastTransitionTime":"2026-02-18T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.432604 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.432695 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.432707 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.432728 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.432741 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:26Z","lastTransitionTime":"2026-02-18T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.535209 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.535246 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.535256 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.535277 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.535290 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:26Z","lastTransitionTime":"2026-02-18T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.638437 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.638490 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.638501 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.638522 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.638534 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:26Z","lastTransitionTime":"2026-02-18T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.740382 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.740416 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.740426 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.740442 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.740451 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:26Z","lastTransitionTime":"2026-02-18T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.843120 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.843479 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.843617 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.843738 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.843977 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:26Z","lastTransitionTime":"2026-02-18T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.946108 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.946192 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.946207 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.946230 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:26 crc kubenswrapper[4942]: I0218 19:18:26.946242 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:26Z","lastTransitionTime":"2026-02-18T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.003329 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 12:17:41.7437808 +0000 UTC Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.034799 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:27 crc kubenswrapper[4942]: E0218 19:18:27.034929 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.048645 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.048678 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.048688 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.048700 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.048710 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:27Z","lastTransitionTime":"2026-02-18T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.151161 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.151202 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.151215 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.151231 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.151243 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:27Z","lastTransitionTime":"2026-02-18T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.253712 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.253864 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.253964 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.254050 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.254140 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:27Z","lastTransitionTime":"2026-02-18T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.357640 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.357719 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.357737 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.357797 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.357821 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:27Z","lastTransitionTime":"2026-02-18T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.461043 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.461090 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.461100 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.461116 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.461125 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:27Z","lastTransitionTime":"2026-02-18T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.564125 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.564210 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.564270 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.564304 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.564327 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:27Z","lastTransitionTime":"2026-02-18T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.667063 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.667120 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.667134 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.667157 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.667177 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:27Z","lastTransitionTime":"2026-02-18T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.770177 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.770236 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.770247 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.770266 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.770279 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:27Z","lastTransitionTime":"2026-02-18T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.873330 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.873412 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.873439 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.873467 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.873486 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:27Z","lastTransitionTime":"2026-02-18T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.917848 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.917896 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.917908 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.917928 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.917940 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:27Z","lastTransitionTime":"2026-02-18T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:27 crc kubenswrapper[4942]: E0218 19:18:27.935218 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.946160 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.946256 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.946281 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.946312 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.946341 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:27Z","lastTransitionTime":"2026-02-18T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:27 crc kubenswrapper[4942]: E0218 19:18:27.968713 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.974001 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.974082 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.974103 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.974133 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.974155 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:27Z","lastTransitionTime":"2026-02-18T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:27 crc kubenswrapper[4942]: E0218 19:18:27.991225 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:27Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.996664 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.996721 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.996735 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.996755 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:27 crc kubenswrapper[4942]: I0218 19:18:27.996796 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:27Z","lastTransitionTime":"2026-02-18T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.003799 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 21:57:37.290663388 +0000 UTC Feb 18 19:18:28 crc kubenswrapper[4942]: E0218 19:18:28.010142 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",
\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.014785 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.014836 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.014849 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.014870 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.014882 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:28Z","lastTransitionTime":"2026-02-18T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:28 crc kubenswrapper[4942]: E0218 19:18:28.031246 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:28Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:28 crc kubenswrapper[4942]: E0218 19:18:28.031380 4942 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.033229 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.033262 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.033276 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.033294 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.033305 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:28Z","lastTransitionTime":"2026-02-18T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.035999 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:28 crc kubenswrapper[4942]: E0218 19:18:28.036166 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.036429 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:28 crc kubenswrapper[4942]: E0218 19:18:28.036550 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.036985 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:28 crc kubenswrapper[4942]: E0218 19:18:28.037090 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.047899 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.137649 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.137722 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.137745 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.137824 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.137848 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:28Z","lastTransitionTime":"2026-02-18T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.154350 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:28 crc kubenswrapper[4942]: E0218 19:18:28.154517 4942 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:18:28 crc kubenswrapper[4942]: E0218 19:18:28.154586 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs podName:ac5b5f40-34db-4aeb-abb4-57204673bd53 nodeName:}" failed. No retries permitted until 2026-02-18 19:19:00.15456942 +0000 UTC m=+99.859502085 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs") pod "network-metrics-daemon-qwg6q" (UID: "ac5b5f40-34db-4aeb-abb4-57204673bd53") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.240597 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.240660 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.240676 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.240697 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.240712 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:28Z","lastTransitionTime":"2026-02-18T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.344628 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.344686 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.344695 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.344714 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.344725 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:28Z","lastTransitionTime":"2026-02-18T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.448292 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.448358 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.448374 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.448405 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.448423 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:28Z","lastTransitionTime":"2026-02-18T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.551653 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.551716 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.551735 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.551789 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.551807 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:28Z","lastTransitionTime":"2026-02-18T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.655363 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.655435 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.655456 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.655481 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.655502 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:28Z","lastTransitionTime":"2026-02-18T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.758244 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.758280 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.758289 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.758306 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.758320 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:28Z","lastTransitionTime":"2026-02-18T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.862055 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.862125 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.862144 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.862169 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.862187 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:28Z","lastTransitionTime":"2026-02-18T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.965977 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.966027 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.966037 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.966057 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:28 crc kubenswrapper[4942]: I0218 19:18:28.966065 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:28Z","lastTransitionTime":"2026-02-18T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.004746 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 06:10:25.733891518 +0000 UTC Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.035434 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:29 crc kubenswrapper[4942]: E0218 19:18:29.035725 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.068716 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.068794 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.068806 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.068827 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.068840 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:29Z","lastTransitionTime":"2026-02-18T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.171656 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.171721 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.171744 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.171799 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.171820 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:29Z","lastTransitionTime":"2026-02-18T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.274374 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.274419 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.274438 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.274462 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.274479 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:29Z","lastTransitionTime":"2026-02-18T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.377238 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.377292 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.377304 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.377324 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.377339 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:29Z","lastTransitionTime":"2026-02-18T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.480589 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.480663 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.480684 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.480710 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.480727 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:29Z","lastTransitionTime":"2026-02-18T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.583113 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.583165 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.583178 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.583197 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.583207 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:29Z","lastTransitionTime":"2026-02-18T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.597787 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8jfwb_75150b8c-7a02-497b-86c3-eabc9c8dbc55/kube-multus/0.log" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.597856 4942 generic.go:334] "Generic (PLEG): container finished" podID="75150b8c-7a02-497b-86c3-eabc9c8dbc55" containerID="f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4" exitCode=1 Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.597901 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8jfwb" event={"ID":"75150b8c-7a02-497b-86c3-eabc9c8dbc55","Type":"ContainerDied","Data":"f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4"} Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.598619 4942 scope.go:117] "RemoveContainer" containerID="f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.616466 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.631668 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.646918 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.658928 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.679870 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:29Z\\\",\\\"message\\\":\\\"2026-02-18T19:17:44+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29\\\\n2026-02-18T19:17:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29 to /host/opt/cni/bin/\\\\n2026-02-18T19:17:44Z [verbose] multus-daemon started\\\\n2026-02-18T19:17:44Z [verbose] Readiness Indicator file check\\\\n2026-02-18T19:18:29Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.686508 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.686536 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.686549 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.686568 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.686583 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:29Z","lastTransitionTime":"2026-02-18T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.704168 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:08Z\\\",\\\"message\\\":\\\"I0218 19:18:08.178608 6626 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:18:08.178631 6626 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:18:08.178626 6626 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:08.178651 6626 handler.go:190] Sending *v1.Namespace event 
handler 5 for removal\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:18:08.178699 6626 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:08.178745 6626 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:18:08.178787 6626 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:08.178795 6626 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:18:08.178814 6626 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 19:18:08.178831 6626 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:18:08.178834 6626 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:08.178850 6626 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:08.178859 6626 factory.go:656] Stopping watch factory\\\\nI0218 19:18:08.178876 6626 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:18:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.720122 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"276d1ade-b018-4a59-8184-e121ff600ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf61d811b92484ed6f2e49184a29d51957000ce926d74afe7b452b8845673afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://691cb927291454a41fe8552c32737d52f8430e180870cd9c2bdc827926f15cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a3ed5634c2ead9b37bd3c51e5ba9f710e1a2b4430552bfce39b234bc7efdac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.735583 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab011ca-f26a-4a5e-b093-b1f4dc0e5efa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16c164479a6aa22042dd8b972db6fc6b802a7a1fc1a50b1538e85b6afe9b913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.751960 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.767939 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T1
9:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.779910 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.789182 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.789223 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.789235 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.789255 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.789266 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:29Z","lastTransitionTime":"2026-02-18T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.789876 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"
ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a721ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.798400 4942 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc 
kubenswrapper[4942]: I0218 19:18:29.808077 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.817848 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.826735 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.838162 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.848259 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:29Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.891615 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.891656 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.891665 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 
19:18:29.891682 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.891692 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:29Z","lastTransitionTime":"2026-02-18T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.994961 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.995023 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.995042 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.995067 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:29 crc kubenswrapper[4942]: I0218 19:18:29.995084 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:29Z","lastTransitionTime":"2026-02-18T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.005267 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 11:48:56.204228142 +0000 UTC Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.035643 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.035690 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.035750 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:30 crc kubenswrapper[4942]: E0218 19:18:30.035856 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:30 crc kubenswrapper[4942]: E0218 19:18:30.036035 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:30 crc kubenswrapper[4942]: E0218 19:18:30.036183 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.097868 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.097922 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.097933 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.097952 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.097964 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:30Z","lastTransitionTime":"2026-02-18T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.200925 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.200972 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.200982 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.200998 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.201010 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:30Z","lastTransitionTime":"2026-02-18T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.303481 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.303565 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.303587 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.303613 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.303630 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:30Z","lastTransitionTime":"2026-02-18T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.406111 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.406198 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.406221 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.406253 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.406275 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:30Z","lastTransitionTime":"2026-02-18T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.509642 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.509699 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.509712 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.509732 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.509745 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:30Z","lastTransitionTime":"2026-02-18T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.604113 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8jfwb_75150b8c-7a02-497b-86c3-eabc9c8dbc55/kube-multus/0.log" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.604186 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8jfwb" event={"ID":"75150b8c-7a02-497b-86c3-eabc9c8dbc55","Type":"ContainerStarted","Data":"4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46"} Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.617595 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.617633 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.617644 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.617666 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.617677 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:30Z","lastTransitionTime":"2026-02-18T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.620015 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.662986 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:29Z\\\",\\\"message\\\":\\\"2026-02-18T19:17:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29\\\\n2026-02-18T19:17:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29 to /host/opt/cni/bin/\\\\n2026-02-18T19:17:44Z [verbose] multus-daemon started\\\\n2026-02-18T19:17:44Z [verbose] 
Readiness Indicator file check\\\\n2026-02-18T19:18:29Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.699841 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:08Z\\\",\\\"message\\\":\\\"I0218 19:18:08.178608 6626 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0218 19:18:08.178631 6626 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:18:08.178626 6626 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:08.178651 6626 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:18:08.178699 6626 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:08.178745 6626 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:18:08.178787 6626 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:08.178795 6626 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:18:08.178814 6626 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 19:18:08.178831 6626 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:18:08.178834 6626 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:08.178850 6626 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:08.178859 6626 factory.go:656] Stopping watch factory\\\\nI0218 19:18:08.178876 6626 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:18:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.713538 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"276d1ade-b018-4a59-8184-e121ff600ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf61d811b92484ed6f2e49184a29d51957000ce926d74afe7b452b8845673afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://691cb927291454a41fe8552c32737d52f8430e180870cd9c2bdc827926f15cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a3ed5634c2ead9b37bd3c51e5ba9f710e1a2b4430552bfce39b234bc7efdac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.719488 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.719553 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.719569 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.719596 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.719613 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:30Z","lastTransitionTime":"2026-02-18T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.726671 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab011ca-f26a-4a5e-b093-b1f4dc0e5efa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16c164479a6aa22042dd8b972db6fc6b802a7a1fc1a50b1538e85b6afe9b913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.739412 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.753856 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.768539 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.784660 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811c
a11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.799308 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.816841 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.822020 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.822049 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.822058 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.822074 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.822086 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:30Z","lastTransitionTime":"2026-02-18T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.832365 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5
084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.847418 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.864299 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a72
1ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.878675 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc 
kubenswrapper[4942]: I0218 19:18:30.898933 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.911122 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.925716 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.925798 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.925814 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.925834 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.925848 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:30Z","lastTransitionTime":"2026-02-18T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:30 crc kubenswrapper[4942]: I0218 19:18:30.930353 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:30Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.005571 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 10:06:20.233793737 +0000 UTC Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.029334 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.029375 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.029385 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.029402 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.029412 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:31Z","lastTransitionTime":"2026-02-18T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.035150 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:31 crc kubenswrapper[4942]: E0218 19:18:31.035318 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.056207 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.074935 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.098734 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:29Z\\\",\\\"message\\\":\\\"2026-02-18T19:17:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29\\\\n2026-02-18T19:17:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29 to /host/opt/cni/bin/\\\\n2026-02-18T19:17:44Z [verbose] multus-daemon started\\\\n2026-02-18T19:17:44Z [verbose] 
Readiness Indicator file check\\\\n2026-02-18T19:18:29Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.122108 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:08Z\\\",\\\"message\\\":\\\"I0218 19:18:08.178608 6626 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0218 19:18:08.178631 6626 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:18:08.178626 6626 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:08.178651 6626 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:18:08.178699 6626 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:08.178745 6626 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:18:08.178787 6626 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:08.178795 6626 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:18:08.178814 6626 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 19:18:08.178831 6626 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:18:08.178834 6626 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:08.178850 6626 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:08.178859 6626 factory.go:656] Stopping watch factory\\\\nI0218 19:18:08.178876 6626 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:18:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.132178 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.132236 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.132249 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.132269 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.132287 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:31Z","lastTransitionTime":"2026-02-18T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.141130 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"276d1ade-b018-4a59-8184-e121ff600ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf61d811b92484ed6f2e49184a29d51957000ce926d74afe7b452b8845673afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://691cb927291454a41fe8552c32737d52f8430e180870cd9c2bdc827926f15cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a3ed5634c2ead9b37bd3c51e5ba9f710e1a2b4430552bfce39b234bc7efdac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.156725 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab011ca-f26a-4a5e-b093-b1f4dc0e5efa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16c164479a6aa22042dd8b972db6fc6b802a7a1fc1a50b1538e85b6afe9b913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.180550 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.195717 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.219036 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContaine
rStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.233470 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.235808 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.235858 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.235873 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.235896 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.235912 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:31Z","lastTransitionTime":"2026-02-18T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.243614 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.258109 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.273444 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.287124 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a72
1ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.300004 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc 
kubenswrapper[4942]: I0218 19:18:31.316492 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.327693 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.338689 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.338728 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.338739 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.338754 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.338807 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:31Z","lastTransitionTime":"2026-02-18T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.340955 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:31Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.442902 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.442954 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.442964 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.442982 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.442993 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:31Z","lastTransitionTime":"2026-02-18T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.545836 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.545882 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.545891 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.545909 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.545917 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:31Z","lastTransitionTime":"2026-02-18T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.649478 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.649528 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.649541 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.649561 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.649572 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:31Z","lastTransitionTime":"2026-02-18T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.752048 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.752099 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.752111 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.752130 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.752142 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:31Z","lastTransitionTime":"2026-02-18T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.855453 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.855520 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.855539 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.855567 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.855587 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:31Z","lastTransitionTime":"2026-02-18T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.959954 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.960043 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.960074 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.960097 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:31 crc kubenswrapper[4942]: I0218 19:18:31.960124 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:31Z","lastTransitionTime":"2026-02-18T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.006878 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 16:20:59.567592936 +0000 UTC Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.035339 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.035376 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.035393 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:32 crc kubenswrapper[4942]: E0218 19:18:32.035659 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:32 crc kubenswrapper[4942]: E0218 19:18:32.035807 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:32 crc kubenswrapper[4942]: E0218 19:18:32.035947 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.062834 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.062878 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.062896 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.062917 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.062931 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:32Z","lastTransitionTime":"2026-02-18T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.166055 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.166116 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.166127 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.166147 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.166161 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:32Z","lastTransitionTime":"2026-02-18T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.268733 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.268798 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.268810 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.268826 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.268838 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:32Z","lastTransitionTime":"2026-02-18T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.371916 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.371969 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.371980 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.371999 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.372012 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:32Z","lastTransitionTime":"2026-02-18T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.474953 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.475119 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.475130 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.475146 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.475156 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:32Z","lastTransitionTime":"2026-02-18T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.578388 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.578421 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.578429 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.578444 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.578454 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:32Z","lastTransitionTime":"2026-02-18T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.680533 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.680589 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.680601 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.680622 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.680638 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:32Z","lastTransitionTime":"2026-02-18T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.783145 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.783220 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.783231 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.783252 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.783268 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:32Z","lastTransitionTime":"2026-02-18T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.885756 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.885864 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.885885 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.885917 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.885940 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:32Z","lastTransitionTime":"2026-02-18T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.988915 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.989029 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.989049 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.989119 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:32 crc kubenswrapper[4942]: I0218 19:18:32.989138 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:32Z","lastTransitionTime":"2026-02-18T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.007175 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 03:32:09.515489795 +0000 UTC Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.035902 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:33 crc kubenswrapper[4942]: E0218 19:18:33.036189 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.091250 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.091294 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.091307 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.091325 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.091338 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:33Z","lastTransitionTime":"2026-02-18T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.194130 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.194201 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.194212 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.194232 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.194248 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:33Z","lastTransitionTime":"2026-02-18T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.297228 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.297284 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.297298 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.297317 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.297328 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:33Z","lastTransitionTime":"2026-02-18T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.400901 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.400965 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.400978 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.401004 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.401020 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:33Z","lastTransitionTime":"2026-02-18T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.504015 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.504059 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.504070 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.504093 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.504105 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:33Z","lastTransitionTime":"2026-02-18T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.606865 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.606928 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.606946 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.606972 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.606991 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:33Z","lastTransitionTime":"2026-02-18T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.709427 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.709493 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.709515 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.709546 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.709568 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:33Z","lastTransitionTime":"2026-02-18T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.812459 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.812520 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.812541 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.812572 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.812595 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:33Z","lastTransitionTime":"2026-02-18T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.915122 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.915302 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.915329 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.915366 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:33 crc kubenswrapper[4942]: I0218 19:18:33.915386 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:33Z","lastTransitionTime":"2026-02-18T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.007369 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 17:27:48.649414084 +0000 UTC Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.018558 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.018626 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.018655 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.018687 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.018704 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:34Z","lastTransitionTime":"2026-02-18T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.035007 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.035156 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.035243 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:34 crc kubenswrapper[4942]: E0218 19:18:34.035371 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:34 crc kubenswrapper[4942]: E0218 19:18:34.035278 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:34 crc kubenswrapper[4942]: E0218 19:18:34.035577 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.122064 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.122117 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.122133 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.122157 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.122175 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:34Z","lastTransitionTime":"2026-02-18T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.224422 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.224472 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.224481 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.224498 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.224508 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:34Z","lastTransitionTime":"2026-02-18T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.327856 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.327920 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.327940 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.327968 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.327988 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:34Z","lastTransitionTime":"2026-02-18T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.431046 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.431110 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.431133 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.431191 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.431209 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:34Z","lastTransitionTime":"2026-02-18T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.534440 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.534568 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.534590 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.534625 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.534647 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:34Z","lastTransitionTime":"2026-02-18T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.638459 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.638521 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.638535 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.638562 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.638577 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:34Z","lastTransitionTime":"2026-02-18T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.742497 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.742561 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.742579 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.742603 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.742619 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:34Z","lastTransitionTime":"2026-02-18T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.845991 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.846046 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.846058 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.846079 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.846093 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:34Z","lastTransitionTime":"2026-02-18T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.948601 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.948661 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.948680 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.948705 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:34 crc kubenswrapper[4942]: I0218 19:18:34.948725 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:34Z","lastTransitionTime":"2026-02-18T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.008547 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:25:00.679658309 +0000 UTC Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.035429 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:35 crc kubenswrapper[4942]: E0218 19:18:35.035987 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.036513 4942 scope.go:117] "RemoveContainer" containerID="5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.050706 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.050786 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.050805 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.050829 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.050848 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:35Z","lastTransitionTime":"2026-02-18T19:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.153415 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.153484 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.153503 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.153534 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.153555 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:35Z","lastTransitionTime":"2026-02-18T19:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.256875 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.256914 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.256925 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.256945 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.256960 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:35Z","lastTransitionTime":"2026-02-18T19:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.360752 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.360848 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.360865 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.360893 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.360912 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:35Z","lastTransitionTime":"2026-02-18T19:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.465301 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.465343 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.465352 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.465368 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.465379 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:35Z","lastTransitionTime":"2026-02-18T19:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.567908 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.567940 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.567949 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.567967 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.567977 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:35Z","lastTransitionTime":"2026-02-18T19:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.622290 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/2.log" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.624804 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerStarted","Data":"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9"} Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.625385 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.641192 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.653648 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.670948 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.671012 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.671031 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.671057 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.671076 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:35Z","lastTransitionTime":"2026-02-18T19:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.673672 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.686289 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"276d1ade-b018-4a59-8184-e121ff600ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf61d811b92484ed6f2e49184a29d51957000ce926d74afe7b452b8845673afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd7
91fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://691cb927291454a41fe8552c32737d52f8430e180870cd9c2bdc827926f15cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a3ed5634c2ead9b37bd3c51e5ba9f710e1a2b4430552bfce39b234bc7efdac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.701672 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab011ca-f26a-4a5e-b093-b1f4dc0e5efa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16c164479a6aa22042dd8b972db6fc6b802a7a1fc1a50b1538e85b6afe9b913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.718182 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.734476 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.770406 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.774134 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.774170 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.774182 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.774199 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.774210 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:35Z","lastTransitionTime":"2026-02-18T19:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.783728 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.798906 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:29Z\\\",\\\"message\\\":\\\"2026-02-18T19:17:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29\\\\n2026-02-18T19:17:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29 to /host/opt/cni/bin/\\\\n2026-02-18T19:17:44Z [verbose] multus-daemon started\\\\n2026-02-18T19:17:44Z [verbose] Readiness Indicator file check\\\\n2026-02-18T19:18:29Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.829412 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:08Z\\\",\\\"message\\\":\\\"I0218 19:18:08.178608 6626 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 19:18:08.178631 6626 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:18:08.178626 6626 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:08.178651 6626 handler.go:190] Sending *v1.Namespace event 
handler 5 for removal\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:18:08.178699 6626 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:08.178745 6626 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:18:08.178787 6626 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:08.178795 6626 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:18:08.178814 6626 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 19:18:08.178831 6626 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:18:08.178834 6626 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:08.178850 6626 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:08.178859 6626 factory.go:656] Stopping watch factory\\\\nI0218 19:18:08.178876 6626 ovnkube.go:599] Stopped ovnkube\\\\nI0218 
19:18:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.843454 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811c
a11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.853662 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.863698 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.874945 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.876369 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.876399 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.876410 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.876430 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.876442 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:35Z","lastTransitionTime":"2026-02-18T19:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.892052 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.906415 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a72
1ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.916658 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:35Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:35 crc 
kubenswrapper[4942]: I0218 19:18:35.979915 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.981598 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.981967 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.982589 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:35 crc kubenswrapper[4942]: I0218 19:18:35.982627 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:35Z","lastTransitionTime":"2026-02-18T19:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.009680 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 10:37:13.517830815 +0000 UTC Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.035707 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.035856 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:36 crc kubenswrapper[4942]: E0218 19:18:36.036082 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.036112 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:36 crc kubenswrapper[4942]: E0218 19:18:36.037503 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:36 crc kubenswrapper[4942]: E0218 19:18:36.038479 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.087303 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.087383 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.087398 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.087421 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.087435 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:36Z","lastTransitionTime":"2026-02-18T19:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.190738 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.190823 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.190840 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.190866 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.190883 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:36Z","lastTransitionTime":"2026-02-18T19:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.293372 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.293406 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.293413 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.293430 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.293439 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:36Z","lastTransitionTime":"2026-02-18T19:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.396191 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.396262 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.396279 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.396302 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.396323 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:36Z","lastTransitionTime":"2026-02-18T19:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.499281 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.499354 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.499382 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.499408 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.499421 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:36Z","lastTransitionTime":"2026-02-18T19:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.601832 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.601873 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.601884 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.601901 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.601915 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:36Z","lastTransitionTime":"2026-02-18T19:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.631199 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/3.log" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.631844 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/2.log" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.634986 4942 generic.go:334] "Generic (PLEG): container finished" podID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerID="331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9" exitCode=1 Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.635033 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9"} Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.635078 4942 scope.go:117] "RemoveContainer" containerID="5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.635708 4942 scope.go:117] "RemoveContainer" containerID="331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9" Feb 18 19:18:36 crc kubenswrapper[4942]: E0218 19:18:36.635893 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.662696 4942 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d600
6e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.683176 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.700534 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.705931 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.705983 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.705993 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.706012 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.706027 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:36Z","lastTransitionTime":"2026-02-18T19:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.722412 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5
084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.743060 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.763581 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a72
1ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.778452 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc 
kubenswrapper[4942]: I0218 19:18:36.794681 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.809499 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.809561 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.809578 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.809602 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.809854 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:36Z","lastTransitionTime":"2026-02-18T19:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.810274 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.828111 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3
79b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203
bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-
cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.847300 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.864625 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.884417 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:29Z\\\",\\\"message\\\":\\\"2026-02-18T19:17:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29\\\\n2026-02-18T19:17:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29 to /host/opt/cni/bin/\\\\n2026-02-18T19:17:44Z [verbose] multus-daemon started\\\\n2026-02-18T19:17:44Z [verbose] 
Readiness Indicator file check\\\\n2026-02-18T19:18:29Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.909114 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5429604f7b234287bf3af48f519550433f88494f95c33feb27806630d47483a5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:08Z\\\",\\\"message\\\":\\\"I0218 19:18:08.178608 6626 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0218 19:18:08.178631 6626 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 19:18:08.178626 6626 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 19:18:08.178651 6626 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 19:18:08.178699 6626 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 19:18:08.178672 6626 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 19:18:08.178745 6626 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0218 19:18:08.178787 6626 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 19:18:08.178795 6626 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0218 19:18:08.178814 6626 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 19:18:08.178831 6626 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 19:18:08.178834 6626 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0218 19:18:08.178850 6626 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0218 19:18:08.178859 6626 factory.go:656] Stopping watch factory\\\\nI0218 19:18:08.178876 6626 ovnkube.go:599] Stopped ovnkube\\\\nI0218 19:18:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:36Z\\\",\\\"message\\\":\\\"il\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:18:36.041536 7014 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 
requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:18:36.041160 7014 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 19:18:36.041605 7014 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 19:18:36.041616 7014 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0218 19:18:36.041625 7014 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0218 19:18:36.041636 7014 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 19:18:36.040597 7014 model_client.go:398] Mutate operations generated as: [{Op:mutate 
Table:Logi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee1
5543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.913113 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.913193 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.913219 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.913253 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.913276 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:36Z","lastTransitionTime":"2026-02-18T19:18:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.925522 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"276d1ade-b018-4a59-8184-e121ff600ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf61d811b92484ed6f2e49184a29d51957000ce926d74afe7b452b8845673afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://691cb927291454a41fe8552c32737d52f8430e180870cd9c2bdc827926f15cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a3ed5634c2ead9b37bd3c51e5ba9f710e1a2b4430552bfce39b234bc7efdac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.940743 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab011ca-f26a-4a5e-b093-b1f4dc0e5efa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16c164479a6aa22042dd8b972db6fc6b802a7a1fc1a50b1538e85b6afe9b913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.958457 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:36 crc kubenswrapper[4942]: I0218 19:18:36.972070 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:36Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.010491 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 18:02:46.102018936 +0000 UTC Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.015879 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.015948 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.015967 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.015996 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.016018 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:37Z","lastTransitionTime":"2026-02-18T19:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.035245 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:37 crc kubenswrapper[4942]: E0218 19:18:37.035404 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.119047 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.119122 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.119140 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.119169 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.119188 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:37Z","lastTransitionTime":"2026-02-18T19:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.222341 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.222405 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.222425 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.222450 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.222469 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:37Z","lastTransitionTime":"2026-02-18T19:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.326633 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.326719 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.326739 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.326808 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.326827 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:37Z","lastTransitionTime":"2026-02-18T19:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.430334 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.430405 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.430427 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.430456 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.430481 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:37Z","lastTransitionTime":"2026-02-18T19:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.534141 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.534210 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.534231 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.534258 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.534278 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:37Z","lastTransitionTime":"2026-02-18T19:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.638320 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.638441 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.638458 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.638486 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.638508 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:37Z","lastTransitionTime":"2026-02-18T19:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.643858 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/3.log" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.650614 4942 scope.go:117] "RemoveContainer" containerID="331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9" Feb 18 19:18:37 crc kubenswrapper[4942]: E0218 19:18:37.650818 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.665840 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.686798 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T1
9:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.702959 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.724327 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a72
1ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.739646 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc 
kubenswrapper[4942]: I0218 19:18:37.743473 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.743557 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.743579 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.743613 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.743632 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:37Z","lastTransitionTime":"2026-02-18T19:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.762609 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5
084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.784381 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.801937 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.826984 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.847300 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.847374 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.847387 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.847410 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.847423 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:37Z","lastTransitionTime":"2026-02-18T19:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.850340 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.871709 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.891939 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.914820 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.934916 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.950438 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.950500 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.950518 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:37 crc 
kubenswrapper[4942]: I0218 19:18:37.950542 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.950560 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:37Z","lastTransitionTime":"2026-02-18T19:18:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.957158 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:29Z\\\",\\\"message\\\":\\\"2026-02-18T19:17:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29\\\\n2026-02-18T19:17:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29 to /host/opt/cni/bin/\\\\n2026-02-18T19:17:44Z [verbose] multus-daemon started\\\\n2026-02-18T19:17:44Z [verbose] Readiness Indicator file check\\\\n2026-02-18T19:18:29Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:37 crc kubenswrapper[4942]: I0218 19:18:37.989109 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:36Z\\\",\\\"message\\\":\\\"il\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:18:36.041536 7014 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 
10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:18:36.041160 7014 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 19:18:36.041605 7014 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 19:18:36.041616 7014 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0218 19:18:36.041625 7014 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0218 19:18:36.041636 7014 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 19:18:36.040597 7014 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:37Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.008842 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"276d1ade-b018-4a59-8184-e121ff600ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf61d811b92484ed6f2e49184a29d51957000ce926d74afe7b452b8845673afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://691cb927291454a41fe8552c32737d52f8430e180870cd9c2bdc827926f15cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a3ed5634c2ead9b37bd3c51e5ba9f710e1a2b4430552bfce39b234bc7efdac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.010911 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 04:55:36.984697244 +0000 UTC Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.026911 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab011ca-f26a-4a5e-b093-b1f4dc0e5efa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16c164479a6aa22042dd8b972db6fc6b802a7a1fc1a50b1538e85b6afe9b913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.035081 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.035216 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:38 crc kubenswrapper[4942]: E0218 19:18:38.035273 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.035083 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:38 crc kubenswrapper[4942]: E0218 19:18:38.035440 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:38 crc kubenswrapper[4942]: E0218 19:18:38.035687 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.053499 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.053576 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.053605 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.053637 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.053662 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:38Z","lastTransitionTime":"2026-02-18T19:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.157076 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.157171 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.157198 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.157232 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.157260 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:38Z","lastTransitionTime":"2026-02-18T19:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.261113 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.261172 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.261218 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.261239 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.261254 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:38Z","lastTransitionTime":"2026-02-18T19:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.354969 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.355030 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.355047 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.355074 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.355092 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:38Z","lastTransitionTime":"2026-02-18T19:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:38 crc kubenswrapper[4942]: E0218 19:18:38.377845 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.383876 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.383963 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.383988 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.384022 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.384047 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:38Z","lastTransitionTime":"2026-02-18T19:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:38 crc kubenswrapper[4942]: E0218 19:18:38.403562 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.408047 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.408134 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.408153 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.408179 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.408199 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:38Z","lastTransitionTime":"2026-02-18T19:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:38 crc kubenswrapper[4942]: E0218 19:18:38.428623 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.433900 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.433975 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.434000 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.434036 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.434061 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:38Z","lastTransitionTime":"2026-02-18T19:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:38 crc kubenswrapper[4942]: E0218 19:18:38.459472 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.465554 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.465613 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.465637 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.465669 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.465693 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:38Z","lastTransitionTime":"2026-02-18T19:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:38 crc kubenswrapper[4942]: E0218 19:18:38.483189 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:38Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:38 crc kubenswrapper[4942]: E0218 19:18:38.483409 4942 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.486362 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.486477 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.486559 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.486595 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.486700 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:38Z","lastTransitionTime":"2026-02-18T19:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.589823 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.589890 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.589908 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.589941 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.589969 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:38Z","lastTransitionTime":"2026-02-18T19:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.693498 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.693609 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.693628 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.693697 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.693718 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:38Z","lastTransitionTime":"2026-02-18T19:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.797419 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.797499 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.797539 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.797572 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.797595 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:38Z","lastTransitionTime":"2026-02-18T19:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.901384 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.901450 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.901471 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.901496 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:38 crc kubenswrapper[4942]: I0218 19:18:38.901515 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:38Z","lastTransitionTime":"2026-02-18T19:18:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.004838 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.004906 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.004922 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.004950 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.004968 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:39Z","lastTransitionTime":"2026-02-18T19:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.012103 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 20:23:31.764992536 +0000 UTC Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.035904 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:39 crc kubenswrapper[4942]: E0218 19:18:39.036096 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.108588 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.108687 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.108712 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.108747 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.108810 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:39Z","lastTransitionTime":"2026-02-18T19:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.212507 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.212597 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.212617 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.212646 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.212665 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:39Z","lastTransitionTime":"2026-02-18T19:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.316431 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.316522 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.316542 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.316569 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.316589 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:39Z","lastTransitionTime":"2026-02-18T19:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.419827 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.419892 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.419912 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.419940 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.419961 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:39Z","lastTransitionTime":"2026-02-18T19:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.522996 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.523067 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.523088 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.523116 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.523137 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:39Z","lastTransitionTime":"2026-02-18T19:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.626232 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.626304 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.626342 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.626377 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.626402 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:39Z","lastTransitionTime":"2026-02-18T19:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.728985 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.729022 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.729032 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.729048 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.729094 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:39Z","lastTransitionTime":"2026-02-18T19:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.832480 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.832536 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.832550 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.832595 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.832665 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:39Z","lastTransitionTime":"2026-02-18T19:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.935377 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.935429 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.935447 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.935474 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:39 crc kubenswrapper[4942]: I0218 19:18:39.935492 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:39Z","lastTransitionTime":"2026-02-18T19:18:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.012803 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 17:34:37.902645373 +0000 UTC Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.035907 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.036023 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:40 crc kubenswrapper[4942]: E0218 19:18:40.036114 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:40 crc kubenswrapper[4942]: E0218 19:18:40.036238 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.036271 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:40 crc kubenswrapper[4942]: E0218 19:18:40.036402 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.038272 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.038342 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.038365 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.038391 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.038411 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:40Z","lastTransitionTime":"2026-02-18T19:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.142177 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.142314 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.142346 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.142381 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.142401 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:40Z","lastTransitionTime":"2026-02-18T19:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.246155 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.246231 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.246253 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.246283 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.246301 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:40Z","lastTransitionTime":"2026-02-18T19:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.349429 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.349496 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.349513 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.349537 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.349555 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:40Z","lastTransitionTime":"2026-02-18T19:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.452978 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.453043 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.453057 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.453077 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.453092 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:40Z","lastTransitionTime":"2026-02-18T19:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.556612 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.556678 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.556705 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.556737 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.556794 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:40Z","lastTransitionTime":"2026-02-18T19:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.659554 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.659617 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.659636 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.659665 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.659684 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:40Z","lastTransitionTime":"2026-02-18T19:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.766319 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.766417 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.766440 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.766466 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.766485 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:40Z","lastTransitionTime":"2026-02-18T19:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.869455 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.869559 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.869580 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.869609 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.869628 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:40Z","lastTransitionTime":"2026-02-18T19:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.973716 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.973968 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.974020 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.974052 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:40 crc kubenswrapper[4942]: I0218 19:18:40.974071 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:40Z","lastTransitionTime":"2026-02-18T19:18:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.013545 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 12:23:03.663222231 +0000 UTC Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.035713 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:41 crc kubenswrapper[4942]: E0218 19:18:41.035983 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.056296 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.073880 4942 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a721ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.077553 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.077627 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 
19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.077646 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.077670 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.077718 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:41Z","lastTransitionTime":"2026-02-18T19:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.090638 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.114979 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.128494 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.139132 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.155096 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.168379 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab011ca-f26a-4a5e-b093-b1f4dc0e5efa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16c164479a6aa22042dd8b972db6fc6b802a7a1fc1a50b1538e85b6afe9b913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.180745 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.180833 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.180852 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.180884 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.180901 4942 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:41Z","lastTransitionTime":"2026-02-18T19:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.183423 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.199856 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888
cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.217911 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.230334 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.251487 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:29Z\\\",\\\"message\\\":\\\"2026-02-18T19:17:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29\\\\n2026-02-18T19:17:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29 to /host/opt/cni/bin/\\\\n2026-02-18T19:17:44Z [verbose] multus-daemon started\\\\n2026-02-18T19:17:44Z [verbose] 
Readiness Indicator file check\\\\n2026-02-18T19:18:29Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.286884 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:36Z\\\",\\\"message\\\":\\\"il\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:18:36.041536 7014 model_client.go:382] Update 
operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:18:36.041160 7014 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 19:18:36.041605 7014 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 19:18:36.041616 7014 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0218 19:18:36.041625 7014 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0218 19:18:36.041636 7014 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 19:18:36.040597 7014 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.289397 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.289449 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.289492 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.289528 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.289547 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:41Z","lastTransitionTime":"2026-02-18T19:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.307267 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"276d1ade-b018-4a59-8184-e121ff600ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf61d811b92484ed6f2e49184a29d51957000ce926d74afe7b452b8845673afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://691cb927291454a41fe8552c32737d52f8430e180870cd9c2bdc827926f15cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a3ed5634c2ead9b37bd3c51e5ba9f710e1a2b4430552bfce39b234bc7efdac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.320726 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.332493 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.346787 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T1
9:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:41Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.392706 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.392744 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.392775 4942 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.392796 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.392810 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:41Z","lastTransitionTime":"2026-02-18T19:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.495140 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.495216 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.495240 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.495275 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.495300 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:41Z","lastTransitionTime":"2026-02-18T19:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.598013 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.598050 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.598059 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.598075 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.598084 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:41Z","lastTransitionTime":"2026-02-18T19:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.700752 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.700856 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.700880 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.700910 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.700933 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:41Z","lastTransitionTime":"2026-02-18T19:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.804594 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.804664 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.804811 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.804902 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.804932 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:41Z","lastTransitionTime":"2026-02-18T19:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.907662 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.907836 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.907861 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.907889 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:41 crc kubenswrapper[4942]: I0218 19:18:41.907908 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:41Z","lastTransitionTime":"2026-02-18T19:18:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.011131 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.011207 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.011226 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.011255 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.011275 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:42Z","lastTransitionTime":"2026-02-18T19:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.014275 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 14:45:47.101671997 +0000 UTC Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.035297 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.035346 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.035359 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:42 crc kubenswrapper[4942]: E0218 19:18:42.035480 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:42 crc kubenswrapper[4942]: E0218 19:18:42.035691 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:42 crc kubenswrapper[4942]: E0218 19:18:42.035846 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.114470 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.114551 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.114821 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.114918 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.114944 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:42Z","lastTransitionTime":"2026-02-18T19:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.218802 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.218886 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.218909 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.218935 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.218956 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:42Z","lastTransitionTime":"2026-02-18T19:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.322092 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.322172 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.322195 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.322226 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.322247 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:42Z","lastTransitionTime":"2026-02-18T19:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.425806 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.425865 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.425883 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.425914 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.425932 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:42Z","lastTransitionTime":"2026-02-18T19:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.528610 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.528670 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.528681 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.528697 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.528709 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:42Z","lastTransitionTime":"2026-02-18T19:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.631377 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.631432 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.631444 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.631463 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.631475 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:42Z","lastTransitionTime":"2026-02-18T19:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.734535 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.734606 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.734630 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.734662 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.734683 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:42Z","lastTransitionTime":"2026-02-18T19:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.837610 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.837666 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.837680 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.837702 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.837714 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:42Z","lastTransitionTime":"2026-02-18T19:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.940553 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.940620 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.940639 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.940665 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:42 crc kubenswrapper[4942]: I0218 19:18:42.940683 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:42Z","lastTransitionTime":"2026-02-18T19:18:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.014499 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 17:55:10.006794158 +0000 UTC Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.034956 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:43 crc kubenswrapper[4942]: E0218 19:18:43.035537 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.043085 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.043151 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.043179 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.043209 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.043227 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.146523 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.146601 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.146619 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.146651 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.146670 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.249575 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.249681 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.249718 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.249839 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.249862 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.353606 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.353673 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.353693 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.353721 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.353741 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.456825 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.456913 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.456931 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.456962 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.456980 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.561342 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.561407 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.561428 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.561455 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.561478 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.664716 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.664808 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.664827 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.664852 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.664867 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.768516 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.768577 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.768597 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.768623 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.768641 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.871632 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.871721 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.871745 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.871813 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.871847 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.975818 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.975882 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.975898 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.975923 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:43 crc kubenswrapper[4942]: I0218 19:18:43.975941 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:43Z","lastTransitionTime":"2026-02-18T19:18:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.014993 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 11:20:37.033363042 +0000 UTC Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.035469 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.035512 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.035607 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:44 crc kubenswrapper[4942]: E0218 19:18:44.035740 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:44 crc kubenswrapper[4942]: E0218 19:18:44.035937 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:44 crc kubenswrapper[4942]: E0218 19:18:44.036086 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.079283 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.079334 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.079351 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.079377 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.079397 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.182739 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.182826 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.182838 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.182857 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.182873 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.288253 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.288310 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.288319 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.288335 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.288359 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.391270 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.391342 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.391359 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.391384 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.391402 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.495090 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.495151 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.495168 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.495192 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.495219 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.598335 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.598405 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.598423 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.598451 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.598468 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.701498 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.701577 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.701600 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.701640 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.701663 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.805215 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.805286 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.805302 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.805334 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.805354 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.910610 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.910676 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.910698 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.910739 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:44 crc kubenswrapper[4942]: I0218 19:18:44.910785 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:44Z","lastTransitionTime":"2026-02-18T19:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.015130 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.015188 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.015212 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.015182 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 19:34:16.992575745 +0000 UTC Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.015240 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.015278 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.036024 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:45 crc kubenswrapper[4942]: E0218 19:18:45.036194 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.118689 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.119178 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.119331 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.119649 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.119833 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.223237 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.223299 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.223317 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.223346 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.223365 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.326585 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.326660 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.326678 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.326703 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.326723 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.430261 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.430339 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.430366 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.430400 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.430422 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.533281 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.533331 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.533347 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.533371 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.533426 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.636982 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.637064 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.637090 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.637122 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.637151 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.740311 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.740380 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.740404 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.740433 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.740456 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.843615 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.843686 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.843713 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.843745 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.843798 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.871413 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.871478 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:45 crc kubenswrapper[4942]: E0218 19:18:45.871687 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:18:45 crc kubenswrapper[4942]: E0218 19:18:45.871726 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:18:45 crc kubenswrapper[4942]: E0218 19:18:45.871747 4942 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:45 crc kubenswrapper[4942]: E0218 19:18:45.871870 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.871846524 +0000 UTC m=+149.576779219 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:45 crc kubenswrapper[4942]: E0218 19:18:45.871969 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 19:18:45 crc kubenswrapper[4942]: E0218 19:18:45.871991 4942 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 19:18:45 crc kubenswrapper[4942]: E0218 19:18:45.872005 4942 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:45 crc kubenswrapper[4942]: E0218 19:18:45.872049 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.872035629 +0000 UTC m=+149.576968334 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.947077 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.947136 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.947153 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.947177 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:45 crc kubenswrapper[4942]: I0218 19:18:45.947197 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:45Z","lastTransitionTime":"2026-02-18T19:18:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.015463 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 08:17:59.529701696 +0000 UTC Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.035220 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.035292 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.035250 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:46 crc kubenswrapper[4942]: E0218 19:18:46.035466 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:46 crc kubenswrapper[4942]: E0218 19:18:46.035599 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:46 crc kubenswrapper[4942]: E0218 19:18:46.035800 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.050084 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.050139 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.050161 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.050190 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.050215 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.074067 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:18:46 crc kubenswrapper[4942]: E0218 19:18:46.074266 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 19:19:50.074226317 +0000 UTC m=+149.779159032 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.074437 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.074491 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:46 crc kubenswrapper[4942]: E0218 19:18:46.074685 4942 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:18:46 crc kubenswrapper[4942]: E0218 19:18:46.074710 4942 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:18:46 crc kubenswrapper[4942]: E0218 19:18:46.074834 4942 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:19:50.074804302 +0000 UTC m=+149.779736997 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 19:18:46 crc kubenswrapper[4942]: E0218 19:18:46.074875 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 19:19:50.074855953 +0000 UTC m=+149.779788778 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.153575 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.153646 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.153663 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.153690 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.153710 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.257065 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.257207 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.257232 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.257266 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.257289 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.361392 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.361452 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.361463 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.361486 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.361500 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.464749 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.464863 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.464882 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.464906 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.464926 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.568555 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.568659 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.568676 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.568720 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.568805 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.674230 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.674284 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.674301 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.674324 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.674339 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.777639 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.777713 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.777739 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.777807 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.777837 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.881019 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.881070 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.881081 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.881099 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.881112 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.984748 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.984910 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.985253 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.985307 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:46 crc kubenswrapper[4942]: I0218 19:18:46.985330 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:46Z","lastTransitionTime":"2026-02-18T19:18:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.016147 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 18:12:13.511647487 +0000 UTC Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.035240 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:47 crc kubenswrapper[4942]: E0218 19:18:47.035435 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.088931 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.088996 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.089015 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.089043 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.089064 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.193487 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.193589 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.193614 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.193646 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.193670 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.297646 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.297749 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.297793 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.297820 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.297840 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.401890 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.401966 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.401985 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.402011 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.402029 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.505658 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.505839 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.505873 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.505952 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.506045 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.609956 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.610050 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.610075 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.610103 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.610135 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.714084 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.714180 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.714202 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.714230 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.714251 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.817232 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.817359 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.817383 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.817409 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.817426 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.921229 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.921301 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.921320 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.921345 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:47 crc kubenswrapper[4942]: I0218 19:18:47.921363 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:47Z","lastTransitionTime":"2026-02-18T19:18:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.016514 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 04:56:05.503592719 +0000 UTC Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.024532 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.024601 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.024622 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.024649 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.024668 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.035201 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.035234 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.035245 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:48 crc kubenswrapper[4942]: E0218 19:18:48.035377 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:48 crc kubenswrapper[4942]: E0218 19:18:48.035519 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:48 crc kubenswrapper[4942]: E0218 19:18:48.035618 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.128685 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.128830 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.128867 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.128899 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.128920 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.231694 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.231754 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.231779 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.231796 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.231808 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.335061 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.335138 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.335156 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.335179 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.335198 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.439091 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.439156 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.439177 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.439222 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.439252 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.544316 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.544406 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.544432 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.544467 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.544496 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.647591 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.647633 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.647645 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.647663 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.647676 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.750995 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.751067 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.751084 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.751111 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.751130 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.780988 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.781075 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.781098 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.781127 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.781148 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4942]: E0218 19:18:48.802964 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.808122 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.808235 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.808259 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.808293 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.808312 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4942]: E0218 19:18:48.827365 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.831908 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.831977 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.832002 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.832036 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.832060 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4942]: E0218 19:18:48.853354 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.859016 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.859077 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.859105 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.859134 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.859159 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4942]: E0218 19:18:48.878417 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.883784 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.883827 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.883840 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.883859 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.883872 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:48 crc kubenswrapper[4942]: E0218 19:18:48.904625 4942 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26ba8477-3134-4454-b1a3-81cc0f315017\\\",\\\"systemUUID\\\":\\\"15e4da6b-0b96-4412-ada2-f835d7e5f88a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:48Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:48 crc kubenswrapper[4942]: E0218 19:18:48.904941 4942 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.906938 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.907000 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.907018 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.907043 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:48 crc kubenswrapper[4942]: I0218 19:18:48.907062 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:48Z","lastTransitionTime":"2026-02-18T19:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.010553 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.010623 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.010642 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.010672 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.010691 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.016963 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 17:03:54.936474102 +0000 UTC Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.035725 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:49 crc kubenswrapper[4942]: E0218 19:18:49.035943 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.113633 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.113684 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.113699 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.113726 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.113745 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.217268 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.217346 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.217371 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.217401 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.217421 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.320028 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.320088 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.320112 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.320145 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.320169 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.423051 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.423121 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.423138 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.423164 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.423183 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.525534 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.525609 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.525634 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.525659 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.525678 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.629147 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.629252 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.629273 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.629340 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.629361 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.733167 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.733249 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.733268 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.733297 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.733316 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.837078 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.837158 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.837177 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.837208 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.837228 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.941543 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.941622 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.941640 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.941666 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:49 crc kubenswrapper[4942]: I0218 19:18:49.941683 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:49Z","lastTransitionTime":"2026-02-18T19:18:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.017971 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 21:15:33.134557562 +0000 UTC Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.035832 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.035939 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.035976 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:50 crc kubenswrapper[4942]: E0218 19:18:50.036042 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:50 crc kubenswrapper[4942]: E0218 19:18:50.036181 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:50 crc kubenswrapper[4942]: E0218 19:18:50.036907 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.045187 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.045245 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.045264 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.045290 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.045311 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.148664 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.148727 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.148747 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.148816 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.148844 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.252199 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.252268 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.252286 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.252311 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.252330 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.355311 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.355372 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.355391 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.355416 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.355435 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.458930 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.459021 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.459048 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.459077 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.459094 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.561839 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.561911 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.561936 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.561969 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.561995 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.665969 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.666057 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.666079 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.666107 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.666125 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.769591 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.769648 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.769665 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.769688 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.769699 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.872980 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.873053 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.873074 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.873100 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.873118 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.976722 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.976814 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.976835 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.976861 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:50 crc kubenswrapper[4942]: I0218 19:18:50.976878 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:50Z","lastTransitionTime":"2026-02-18T19:18:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.018164 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 14:51:19.397192165 +0000 UTC Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.035030 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:51 crc kubenswrapper[4942]: E0218 19:18:51.035238 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.056517 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36f5db0de79285e1aca04aee9ebb8824353d8746f2f7df24be858a55db3c9abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.077805 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e4be8605467674f949e5b4b8d282634126ab56d2983d5ffadb64ca4043b79b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.080544 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.080619 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.080646 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.080673 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.080693 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.099677 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.115705 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28921539-823a-4439-a230-3b5aed7085cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1f426cf3a46e9dbd6da2d7e0d1dc2649a781bb63b9b116e2e96e297ffe685f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3f2583de812c35d32f50918d2ea1071672e650d
7bb1eca09416558ca25526b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2zj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wqxh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.138122 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8jfwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75150b8c-7a02-497b-86c3-eabc9c8dbc55\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:29Z\\\",\\\"message\\\":\\\"2026-02-18T19:17:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29\\\\n2026-02-18T19:17:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_4d82d104-9414-4e68-8849-a8f62a9a5d29 to /host/opt/cni/bin/\\\\n2026-02-18T19:17:44Z [verbose] multus-daemon started\\\\n2026-02-18T19:17:44Z [verbose] 
Readiness Indicator file check\\\\n2026-02-18T19:18:29Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:18:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65c5q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8jfwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.161584 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45dc4164-81a9-44cf-b86a-dff571bc0417\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T19:18:36Z\\\",\\\"message\\\":\\\"il\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:18:36.041536 7014 model_client.go:382] Update 
operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0218 19:18:36.041160 7014 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 19:18:36.041605 7014 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 19:18:36.041616 7014 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0218 19:18:36.041625 7014 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0218 19:18:36.041636 7014 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0218 19:18:36.040597 7014 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:18:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581689b9e064557a35
e24e6d5c15a73036b8499700959fd330e9ebee15543edc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl7tj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-89fzv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.179579 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"276d1ade-b018-4a59-8184-e121ff600ea0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf61d811b92484ed6f2e49184a29d51957000ce926d74afe7b452b8845673afb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://691cb927291454a41fe8552c32737d52f8430e180870cd9c2bdc827926f15cd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a3ed5634c2ead9b37bd3c51e5ba9f710e1a2b4430552bfce39b234bc7efdac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8f965989f2401534556e39f4940e0a03935cf6ff85e89a9401fdfc20fc84dbc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.183455 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.183518 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.183533 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.183555 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.183569 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.194170 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dab011ca-f26a-4a5e-b093-b1f4dc0e5efa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16c164479a6aa22042dd8b972db6fc6b802a7a1fc1a50b1538e85b6afe9b913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de8f04ef11faf93e27b40bb3839d1dabcfbb8248407854c379262f626810c92a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.206397 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wxck8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69ef2748-687e-4223-998e-7bd92ad8aaaf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ba4df5c822ff37a1a027d1908aab6472cd0b5a6ab0a2b5e5d1b172774107727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vscpp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wxck8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.220994 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4da93830-99a3-4d84-91c8-a5352a987b3f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:18:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"le observer\\\\nW0218 19:17:41.723890 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 19:17:41.724123 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0218 19:17:41.725411 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3231040961/tls.crt::/tmp/serving-cert-3231040961/tls.key\\\\\\\"\\\\nI0218 19:17:41.923908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 19:17:41.936017 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 19:17:41.936045 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 19:17:41.936073 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 19:17:41.936079 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 19:17:41.944174 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0218 19:17:41.944200 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944205 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 19:17:41.944211 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 19:17:41.944214 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 19:17:41.944217 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 19:17:41.944220 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0218 19:17:41.944371 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0218 19:17:41.958094 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T1
9:17:23Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.233726 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.250546 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8f8b40cd-7bbd-4189-a8c0-f4131e8b9add\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea4ede9f2f9b4438bc9befcf913e5b8c7b9dc765fa1edce809e17c5ac933a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3573f095c220e3b1994394b83fdf24c7d1a72
1ccee2755042f520467f21ae1fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zxvs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:54Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xk99z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.266207 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac5b5f40-34db-4aeb-abb4-57204673bd53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kmmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-qwg6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc 
kubenswrapper[4942]: I0218 19:18:51.285372 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b5d2b9d-7ec0-41fa-a073-399c6fd41eb6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8b81c113e461032be39d6328308bad3189a9e84d987da987d43e8e2f6449fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a247d311cfbec62a54df5757a344bbc7ea516a66ccdeb67aecbbe268a4fbe4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://117748c4c4fa5e68d4b927639faa447ed3a984e0d7364a2224abe27e178d5746\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.287164 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.287250 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.287272 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.287298 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.287317 4942 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.305160 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cc6e8b6926e9cadf0bfdedb3a9fd0e5a7a902ba1cc703cd0396c3d7b2ec8666\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45c0716738e2acbb0104b2ce05e3f23fd6933b653297d10972914500f3e55cd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.321358 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5pgvt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f163820b-df8b-4e07-9b74-d5f3332580a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97b02b2ef091c462632d385e824d90a6dc8270726bb3b5dfaa6c3036e99d323f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjg6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5pgvt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.354218 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ec6cacf-a09d-43b8-89fc-aea1d5a0ca9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d379b6cff5fad06493f1e137d6f8de20b35e5350025c5875db8afb23cf30ac97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T19:17:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26e15915f01864e6357f914244e51cc52eb7c1c79fdda6cebfb23c7723978fc7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3845cae9bbde573b86221f80bf461886dadf5c5149bb19824e505c703e168ba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83b7a573c632fbc4da1c088193202dcbe4f5f09d82c98b12e901a06acc877b80\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2730d908eb063a0dc3278a304a8b7b9aee84bb6df39693e476d6517362864da\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27e
c696374ddd079486c045e1cb9f68f703\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86ba552c18df4c07b6d6b34acf51c27ec696374ddd079486c045e1cb9f68f703\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T19:17:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://522b8abd41e12aecabbbc8a1f16dd8978b1e72b0984784780349570290bcc168\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T19:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-18T19:17:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-27v7m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T19:17:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2rbc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.374277 4942 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T19:17:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T19:18:51Z is after 2025-08-24T17:21:41Z" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.391132 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.391208 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.391229 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 
19:18:51.391259 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.391279 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.494944 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.495020 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.495038 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.495063 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.495083 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.598390 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.598438 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.598451 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.598469 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.598479 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.701682 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.701736 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.701747 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.701791 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.701804 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.804941 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.805023 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.805033 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.805051 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.805061 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.908787 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.908878 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.908905 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.908940 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:51 crc kubenswrapper[4942]: I0218 19:18:51.908968 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:51Z","lastTransitionTime":"2026-02-18T19:18:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.012063 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.012142 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.012161 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.012188 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.012207 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.019312 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 22:07:06.214505962 +0000 UTC Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.034841 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:52 crc kubenswrapper[4942]: E0218 19:18:52.035041 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.035322 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.036016 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:52 crc kubenswrapper[4942]: E0218 19:18:52.036513 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.036658 4942 scope.go:117] "RemoveContainer" containerID="331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9" Feb 18 19:18:52 crc kubenswrapper[4942]: E0218 19:18:52.036842 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" Feb 18 19:18:52 crc kubenswrapper[4942]: E0218 19:18:52.036911 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.115009 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.115082 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.115118 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.115145 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.115165 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.218586 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.218641 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.218654 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.218676 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.218691 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.323030 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.323102 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.323125 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.323153 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.323171 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.425913 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.425997 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.426017 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.426050 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.426070 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.529343 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.529409 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.529421 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.529439 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.529456 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.633359 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.633463 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.633483 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.633513 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.633531 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.735730 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.735829 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.735843 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.735866 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.735882 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.838739 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.838850 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.838868 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.838893 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.838910 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.942312 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.942414 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.942440 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.942483 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:52 crc kubenswrapper[4942]: I0218 19:18:52.942508 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:52Z","lastTransitionTime":"2026-02-18T19:18:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.020116 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 05:32:33.328447005 +0000 UTC Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.035490 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:53 crc kubenswrapper[4942]: E0218 19:18:53.035797 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.045787 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.045870 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.045889 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.045907 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.045919 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.149035 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.149464 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.149487 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.149519 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.149539 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.251711 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.251779 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.251789 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.251803 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.251829 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.355263 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.355418 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.355509 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.355594 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.355706 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.460119 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.460194 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.460229 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.460261 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.460282 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.563929 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.563997 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.564022 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.564054 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.564077 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.667530 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.667609 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.667631 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.667973 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.667997 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.771387 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.771422 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.771431 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.771445 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.771455 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.875555 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.875632 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.875657 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.875689 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.875711 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.978372 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.978433 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.978450 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.978474 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:53 crc kubenswrapper[4942]: I0218 19:18:53.978491 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:53Z","lastTransitionTime":"2026-02-18T19:18:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.021169 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:15:13.277779715 +0000 UTC Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.035641 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.035645 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.035693 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:54 crc kubenswrapper[4942]: E0218 19:18:54.036212 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:54 crc kubenswrapper[4942]: E0218 19:18:54.036366 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:54 crc kubenswrapper[4942]: E0218 19:18:54.036533 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.061478 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.081495 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.081576 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.081608 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.081637 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.081659 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:54Z","lastTransitionTime":"2026-02-18T19:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.185034 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.185105 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.185147 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.185182 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.185209 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:54Z","lastTransitionTime":"2026-02-18T19:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.287326 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.287374 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.287385 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.287399 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.287408 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:54Z","lastTransitionTime":"2026-02-18T19:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.391014 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.391092 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.391109 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.391137 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.391156 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:54Z","lastTransitionTime":"2026-02-18T19:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.494295 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.494343 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.494351 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.494368 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.494378 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:54Z","lastTransitionTime":"2026-02-18T19:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.597412 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.597474 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.597490 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.597515 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.597534 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:54Z","lastTransitionTime":"2026-02-18T19:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.700651 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.700697 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.700706 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.700722 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.700734 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:54Z","lastTransitionTime":"2026-02-18T19:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.803319 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.803355 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.803365 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.803377 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.803386 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:54Z","lastTransitionTime":"2026-02-18T19:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.907254 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.907325 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.907343 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.907370 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:54 crc kubenswrapper[4942]: I0218 19:18:54.907388 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:54Z","lastTransitionTime":"2026-02-18T19:18:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.010210 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.010274 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.010291 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.010315 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.010361 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.021755 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 15:41:27.826339754 +0000 UTC Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.035297 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:55 crc kubenswrapper[4942]: E0218 19:18:55.035495 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.113976 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.114050 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.114079 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.114113 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.114138 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.217273 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.217434 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.217458 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.217488 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.217584 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.321312 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.321382 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.321398 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.321424 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.321443 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.425540 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.425613 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.425633 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.425660 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.425679 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.528974 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.529059 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.529100 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.529130 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.529150 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.632190 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.632256 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.632274 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.632298 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.632318 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.736372 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.736449 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.736470 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.736495 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.736520 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.839182 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.839253 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.839275 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.839309 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.839332 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.942475 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.942556 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.942586 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.942619 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:55 crc kubenswrapper[4942]: I0218 19:18:55.942643 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:55Z","lastTransitionTime":"2026-02-18T19:18:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.022436 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:12:14.29956998 +0000 UTC Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.035128 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.035166 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.035137 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:56 crc kubenswrapper[4942]: E0218 19:18:56.035285 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:56 crc kubenswrapper[4942]: E0218 19:18:56.035427 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:56 crc kubenswrapper[4942]: E0218 19:18:56.035516 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.046580 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.046634 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.046651 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.046753 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.046822 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.149821 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.149888 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.149902 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.149922 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.149935 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.253413 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.253500 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.253518 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.253543 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.253561 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.356407 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.356508 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.356526 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.356553 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.356574 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.460161 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.460269 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.460339 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.460384 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.460405 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.564143 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.564209 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.564229 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.564257 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.564282 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.666830 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.666889 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.666914 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.666947 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.666974 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.770538 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.770614 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.770632 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.770658 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.770676 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.873929 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.874007 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.874025 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.874055 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.874077 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.977456 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.977533 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.977551 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.977579 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:56 crc kubenswrapper[4942]: I0218 19:18:56.977598 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:56Z","lastTransitionTime":"2026-02-18T19:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.023077 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 05:28:44.002423172 +0000 UTC Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.035660 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:57 crc kubenswrapper[4942]: E0218 19:18:57.036074 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.079894 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.079945 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.079955 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.080009 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.080026 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.183092 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.183163 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.183186 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.183212 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.183227 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.286047 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.286108 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.286119 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.286143 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.286156 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.388957 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.389027 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.389045 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.389072 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.389093 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.492852 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.492934 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.492954 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.492982 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.493003 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.596132 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.596213 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.596251 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.596276 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.596292 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.699361 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.699415 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.699424 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.699441 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.699452 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.802639 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.802700 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.802716 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.802740 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.802784 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.905901 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.906045 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.906067 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.906092 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:57 crc kubenswrapper[4942]: I0218 19:18:57.906112 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:57Z","lastTransitionTime":"2026-02-18T19:18:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.009659 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.009861 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.009880 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.009908 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.009929 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.023255 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 14:50:03.136425909 +0000 UTC Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.034954 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.034984 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.034984 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:18:58 crc kubenswrapper[4942]: E0218 19:18:58.035228 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:18:58 crc kubenswrapper[4942]: E0218 19:18:58.035388 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:18:58 crc kubenswrapper[4942]: E0218 19:18:58.035524 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.113100 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.113195 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.113220 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.113258 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.113284 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.216283 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.216420 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.216437 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.216557 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.216636 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.319857 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.319958 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.319981 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.320013 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.320033 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.423945 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.424037 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.424062 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.424094 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.424114 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.526999 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.527056 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.527068 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.527085 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.527095 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.630847 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.630913 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.630931 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.630986 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.631011 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.733381 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.733451 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.733468 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.733492 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.733509 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.836101 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.836174 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.836194 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.836220 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.836240 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.939184 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.939263 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.939282 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.939310 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:58 crc kubenswrapper[4942]: I0218 19:18:58.939335 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:58Z","lastTransitionTime":"2026-02-18T19:18:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.023954 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 22:13:26.227928965 +0000 UTC Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.035215 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:18:59 crc kubenswrapper[4942]: E0218 19:18:59.035407 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.043207 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.043274 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.043290 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.043313 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.043330 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:59Z","lastTransitionTime":"2026-02-18T19:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.146066 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.146118 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.146135 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.146158 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.146176 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:59Z","lastTransitionTime":"2026-02-18T19:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.249320 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.249389 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.249412 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.249442 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.249465 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:59Z","lastTransitionTime":"2026-02-18T19:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.254842 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.254897 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.254915 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.254936 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.254952 4942 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T19:18:59Z","lastTransitionTime":"2026-02-18T19:18:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.330865 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6"] Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.331706 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.336917 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.337110 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.337636 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.337955 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.415449 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podStartSLOduration=78.415413135 podStartE2EDuration="1m18.415413135s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:18:59.397808026 +0000 UTC m=+99.102740711" watchObservedRunningTime="2026-02-18 19:18:59.415413135 +0000 UTC m=+99.120345840" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.415892 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-8jfwb" podStartSLOduration=78.415880057 podStartE2EDuration="1m18.415880057s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:18:59.415301742 +0000 UTC m=+99.120234407" watchObservedRunningTime="2026-02-18 19:18:59.415880057 +0000 UTC 
m=+99.120812762" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.439715 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a3683112-fff6-4df6-ae06-4a3c78a76e5b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.439825 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3683112-fff6-4df6-ae06-4a3c78a76e5b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.439873 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a3683112-fff6-4df6-ae06-4a3c78a76e5b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.439935 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3683112-fff6-4df6-ae06-4a3c78a76e5b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.440255 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3683112-fff6-4df6-ae06-4a3c78a76e5b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.466368 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=42.466327804 podStartE2EDuration="42.466327804s" podCreationTimestamp="2026-02-18 19:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:18:59.465897783 +0000 UTC m=+99.170830518" watchObservedRunningTime="2026-02-18 19:18:59.466327804 +0000 UTC m=+99.171260559" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.482935 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=31.482897947 podStartE2EDuration="31.482897947s" podCreationTimestamp="2026-02-18 19:18:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:18:59.482190008 +0000 UTC m=+99.187122713" watchObservedRunningTime="2026-02-18 19:18:59.482897947 +0000 UTC m=+99.187830652" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.541684 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a3683112-fff6-4df6-ae06-4a3c78a76e5b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.541908 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3683112-fff6-4df6-ae06-4a3c78a76e5b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.541919 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a3683112-fff6-4df6-ae06-4a3c78a76e5b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.542081 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3683112-fff6-4df6-ae06-4a3c78a76e5b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.542154 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3683112-fff6-4df6-ae06-4a3c78a76e5b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.542205 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a3683112-fff6-4df6-ae06-4a3c78a76e5b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.542367 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a3683112-fff6-4df6-ae06-4a3c78a76e5b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.542397 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=77.542380559 podStartE2EDuration="1m17.542380559s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:18:59.542185354 +0000 UTC m=+99.247118029" watchObservedRunningTime="2026-02-18 19:18:59.542380559 +0000 UTC m=+99.247313234" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.543623 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3683112-fff6-4df6-ae06-4a3c78a76e5b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.556085 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3683112-fff6-4df6-ae06-4a3c78a76e5b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.571649 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3683112-fff6-4df6-ae06-4a3c78a76e5b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-66xl6\" (UID: \"a3683112-fff6-4df6-ae06-4a3c78a76e5b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.589215 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-wxck8" podStartSLOduration=78.589189441 podStartE2EDuration="1m18.589189441s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:18:59.576089369 +0000 UTC m=+99.281022064" watchObservedRunningTime="2026-02-18 19:18:59.589189441 +0000 UTC m=+99.294122106" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.619744 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=78.619718068 podStartE2EDuration="1m18.619718068s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:18:59.604236484 +0000 UTC m=+99.309169149" watchObservedRunningTime="2026-02-18 19:18:59.619718068 +0000 UTC m=+99.324650733" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.649024 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xk99z" podStartSLOduration=77.648998451 podStartE2EDuration="1m17.648998451s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 
19:18:59.647502442 +0000 UTC m=+99.352435117" watchObservedRunningTime="2026-02-18 19:18:59.648998451 +0000 UTC m=+99.353931126" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.659113 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.737565 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6" event={"ID":"a3683112-fff6-4df6-ae06-4a3c78a76e5b","Type":"ContainerStarted","Data":"fde2b99c4c2f44221db1343ac6ac41994c84d5f4f2c86b6a5bc822771e5444c4"} Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.764373 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-2rbc4" podStartSLOduration=78.764353222 podStartE2EDuration="1m18.764353222s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:18:59.67692547 +0000 UTC m=+99.381858165" watchObservedRunningTime="2026-02-18 19:18:59.764353222 +0000 UTC m=+99.469285887" Feb 18 19:18:59 crc kubenswrapper[4942]: I0218 19:18:59.764654 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=5.764651319 podStartE2EDuration="5.764651319s" podCreationTimestamp="2026-02-18 19:18:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:18:59.760004268 +0000 UTC m=+99.464936943" watchObservedRunningTime="2026-02-18 19:18:59.764651319 +0000 UTC m=+99.469583974" Feb 18 19:19:00 crc kubenswrapper[4942]: I0218 19:19:00.024625 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 
UTC, rotation deadline is 2025-12-07 23:48:36.087469496 +0000 UTC Feb 18 19:19:00 crc kubenswrapper[4942]: I0218 19:19:00.024740 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 18 19:19:00 crc kubenswrapper[4942]: I0218 19:19:00.034819 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:00 crc kubenswrapper[4942]: I0218 19:19:00.034866 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:00 crc kubenswrapper[4942]: E0218 19:19:00.034995 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:00 crc kubenswrapper[4942]: I0218 19:19:00.035050 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:00 crc kubenswrapper[4942]: E0218 19:19:00.035205 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:00 crc kubenswrapper[4942]: E0218 19:19:00.035549 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:00 crc kubenswrapper[4942]: I0218 19:19:00.037258 4942 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 18 19:19:00 crc kubenswrapper[4942]: I0218 19:19:00.254441 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:00 crc kubenswrapper[4942]: E0218 19:19:00.254705 4942 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:19:00 crc kubenswrapper[4942]: E0218 19:19:00.255065 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs podName:ac5b5f40-34db-4aeb-abb4-57204673bd53 nodeName:}" failed. No retries permitted until 2026-02-18 19:20:04.255042899 +0000 UTC m=+163.959975564 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs") pod "network-metrics-daemon-qwg6q" (UID: "ac5b5f40-34db-4aeb-abb4-57204673bd53") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 19:19:00 crc kubenswrapper[4942]: I0218 19:19:00.743383 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6" event={"ID":"a3683112-fff6-4df6-ae06-4a3c78a76e5b","Type":"ContainerStarted","Data":"acb325a963abc0e7eec844a8cb08d5a92f1d633a5a4fbae8dd5db4a3c4328286"} Feb 18 19:19:00 crc kubenswrapper[4942]: I0218 19:19:00.769647 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-5pgvt" podStartSLOduration=80.76962313 podStartE2EDuration="1m20.76962313s" podCreationTimestamp="2026-02-18 19:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:18:59.791693485 +0000 UTC m=+99.496626160" watchObservedRunningTime="2026-02-18 19:19:00.76962313 +0000 UTC m=+100.474555795" Feb 18 19:19:01 crc kubenswrapper[4942]: I0218 19:19:01.035245 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:01 crc kubenswrapper[4942]: E0218 19:19:01.036375 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:02 crc kubenswrapper[4942]: I0218 19:19:02.035794 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:02 crc kubenswrapper[4942]: E0218 19:19:02.036735 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:02 crc kubenswrapper[4942]: I0218 19:19:02.035937 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:02 crc kubenswrapper[4942]: I0218 19:19:02.035810 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:02 crc kubenswrapper[4942]: E0218 19:19:02.037139 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:02 crc kubenswrapper[4942]: E0218 19:19:02.037401 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:03 crc kubenswrapper[4942]: I0218 19:19:03.035903 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:03 crc kubenswrapper[4942]: E0218 19:19:03.036088 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:04 crc kubenswrapper[4942]: I0218 19:19:04.035088 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:04 crc kubenswrapper[4942]: I0218 19:19:04.035171 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:04 crc kubenswrapper[4942]: E0218 19:19:04.035259 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:04 crc kubenswrapper[4942]: E0218 19:19:04.035383 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:04 crc kubenswrapper[4942]: I0218 19:19:04.035971 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:04 crc kubenswrapper[4942]: E0218 19:19:04.036896 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:05 crc kubenswrapper[4942]: I0218 19:19:05.035816 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:05 crc kubenswrapper[4942]: E0218 19:19:05.036006 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:06 crc kubenswrapper[4942]: I0218 19:19:06.035583 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:06 crc kubenswrapper[4942]: I0218 19:19:06.036319 4942 scope.go:117] "RemoveContainer" containerID="331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9" Feb 18 19:19:06 crc kubenswrapper[4942]: I0218 19:19:06.036041 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:06 crc kubenswrapper[4942]: E0218 19:19:06.036526 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-89fzv_openshift-ovn-kubernetes(45dc4164-81a9-44cf-b86a-dff571bc0417)\"" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" Feb 18 19:19:06 crc kubenswrapper[4942]: E0218 19:19:06.036521 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:06 crc kubenswrapper[4942]: I0218 19:19:06.036044 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:06 crc kubenswrapper[4942]: E0218 19:19:06.036822 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:06 crc kubenswrapper[4942]: E0218 19:19:06.037548 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:07 crc kubenswrapper[4942]: I0218 19:19:07.035284 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:07 crc kubenswrapper[4942]: E0218 19:19:07.035466 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:08 crc kubenswrapper[4942]: I0218 19:19:08.035525 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:08 crc kubenswrapper[4942]: I0218 19:19:08.035597 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:08 crc kubenswrapper[4942]: I0218 19:19:08.035637 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:08 crc kubenswrapper[4942]: E0218 19:19:08.035861 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:08 crc kubenswrapper[4942]: E0218 19:19:08.036151 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:08 crc kubenswrapper[4942]: E0218 19:19:08.036398 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:09 crc kubenswrapper[4942]: I0218 19:19:09.035639 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:09 crc kubenswrapper[4942]: E0218 19:19:09.035947 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:10 crc kubenswrapper[4942]: I0218 19:19:10.035520 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:10 crc kubenswrapper[4942]: I0218 19:19:10.035597 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:10 crc kubenswrapper[4942]: I0218 19:19:10.035543 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:10 crc kubenswrapper[4942]: E0218 19:19:10.035896 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:10 crc kubenswrapper[4942]: E0218 19:19:10.036117 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:10 crc kubenswrapper[4942]: E0218 19:19:10.036349 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:11 crc kubenswrapper[4942]: I0218 19:19:11.035053 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:11 crc kubenswrapper[4942]: E0218 19:19:11.036191 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:12 crc kubenswrapper[4942]: I0218 19:19:12.035404 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:12 crc kubenswrapper[4942]: I0218 19:19:12.035430 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:12 crc kubenswrapper[4942]: E0218 19:19:12.036170 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:12 crc kubenswrapper[4942]: I0218 19:19:12.035432 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:12 crc kubenswrapper[4942]: E0218 19:19:12.036289 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:12 crc kubenswrapper[4942]: E0218 19:19:12.036486 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:13 crc kubenswrapper[4942]: I0218 19:19:13.035931 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:13 crc kubenswrapper[4942]: E0218 19:19:13.036180 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:14 crc kubenswrapper[4942]: I0218 19:19:14.035910 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:14 crc kubenswrapper[4942]: E0218 19:19:14.036102 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:14 crc kubenswrapper[4942]: I0218 19:19:14.036372 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:14 crc kubenswrapper[4942]: I0218 19:19:14.036418 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:14 crc kubenswrapper[4942]: E0218 19:19:14.036509 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:14 crc kubenswrapper[4942]: E0218 19:19:14.036729 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:15 crc kubenswrapper[4942]: I0218 19:19:15.035115 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:15 crc kubenswrapper[4942]: E0218 19:19:15.035486 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:15 crc kubenswrapper[4942]: I0218 19:19:15.802841 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8jfwb_75150b8c-7a02-497b-86c3-eabc9c8dbc55/kube-multus/1.log" Feb 18 19:19:15 crc kubenswrapper[4942]: I0218 19:19:15.803555 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8jfwb_75150b8c-7a02-497b-86c3-eabc9c8dbc55/kube-multus/0.log" Feb 18 19:19:15 crc kubenswrapper[4942]: I0218 19:19:15.803606 4942 generic.go:334] "Generic (PLEG): container finished" podID="75150b8c-7a02-497b-86c3-eabc9c8dbc55" containerID="4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46" exitCode=1 Feb 18 19:19:15 crc kubenswrapper[4942]: I0218 19:19:15.803655 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8jfwb" event={"ID":"75150b8c-7a02-497b-86c3-eabc9c8dbc55","Type":"ContainerDied","Data":"4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46"} Feb 18 19:19:15 crc kubenswrapper[4942]: I0218 19:19:15.803705 4942 scope.go:117] "RemoveContainer" containerID="f6aba9b40a3a963de7e8fb8f2a121318f0800350a41caa30b6aef71468e5e0e4" Feb 18 19:19:15 crc kubenswrapper[4942]: I0218 19:19:15.804368 4942 scope.go:117] "RemoveContainer" containerID="4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46" Feb 18 19:19:15 crc kubenswrapper[4942]: E0218 19:19:15.804952 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-8jfwb_openshift-multus(75150b8c-7a02-497b-86c3-eabc9c8dbc55)\"" pod="openshift-multus/multus-8jfwb" podUID="75150b8c-7a02-497b-86c3-eabc9c8dbc55" Feb 18 19:19:15 crc kubenswrapper[4942]: I0218 19:19:15.838629 4942 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-66xl6" podStartSLOduration=94.838599352 podStartE2EDuration="1m34.838599352s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:00.769114746 +0000 UTC m=+100.474047481" watchObservedRunningTime="2026-02-18 19:19:15.838599352 +0000 UTC m=+115.543532057" Feb 18 19:19:16 crc kubenswrapper[4942]: I0218 19:19:16.035096 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:16 crc kubenswrapper[4942]: I0218 19:19:16.035151 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:16 crc kubenswrapper[4942]: I0218 19:19:16.035187 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:16 crc kubenswrapper[4942]: E0218 19:19:16.035339 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:16 crc kubenswrapper[4942]: E0218 19:19:16.035562 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:16 crc kubenswrapper[4942]: E0218 19:19:16.035667 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:16 crc kubenswrapper[4942]: I0218 19:19:16.809646 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8jfwb_75150b8c-7a02-497b-86c3-eabc9c8dbc55/kube-multus/1.log" Feb 18 19:19:17 crc kubenswrapper[4942]: I0218 19:19:17.035665 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:17 crc kubenswrapper[4942]: E0218 19:19:17.035952 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:18 crc kubenswrapper[4942]: I0218 19:19:18.034972 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:18 crc kubenswrapper[4942]: I0218 19:19:18.034972 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:18 crc kubenswrapper[4942]: E0218 19:19:18.035160 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:18 crc kubenswrapper[4942]: E0218 19:19:18.035519 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:18 crc kubenswrapper[4942]: I0218 19:19:18.036259 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:18 crc kubenswrapper[4942]: E0218 19:19:18.036535 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:19 crc kubenswrapper[4942]: I0218 19:19:19.035211 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:19 crc kubenswrapper[4942]: E0218 19:19:19.035391 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:20 crc kubenswrapper[4942]: I0218 19:19:20.035426 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:20 crc kubenswrapper[4942]: I0218 19:19:20.035566 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:20 crc kubenswrapper[4942]: I0218 19:19:20.035426 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:20 crc kubenswrapper[4942]: E0218 19:19:20.035666 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:20 crc kubenswrapper[4942]: E0218 19:19:20.035733 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:20 crc kubenswrapper[4942]: E0218 19:19:20.035872 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:20 crc kubenswrapper[4942]: E0218 19:19:20.983040 4942 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 18 19:19:21 crc kubenswrapper[4942]: I0218 19:19:21.036515 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:21 crc kubenswrapper[4942]: E0218 19:19:21.036878 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:21 crc kubenswrapper[4942]: I0218 19:19:21.038031 4942 scope.go:117] "RemoveContainer" containerID="331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9" Feb 18 19:19:21 crc kubenswrapper[4942]: E0218 19:19:21.160803 4942 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 18 19:19:21 crc kubenswrapper[4942]: I0218 19:19:21.831508 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/3.log" Feb 18 19:19:21 crc kubenswrapper[4942]: I0218 19:19:21.834983 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerStarted","Data":"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6"} Feb 18 19:19:21 crc kubenswrapper[4942]: I0218 19:19:21.835364 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" Feb 18 19:19:21 crc kubenswrapper[4942]: I0218 19:19:21.870710 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podStartSLOduration=100.87067634900001 podStartE2EDuration="1m40.870676349s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:21.868276647 +0000 UTC m=+121.573209412" watchObservedRunningTime="2026-02-18 19:19:21.870676349 +0000 UTC m=+121.575609054" Feb 18 19:19:22 crc kubenswrapper[4942]: I0218 19:19:22.035654 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:22 crc kubenswrapper[4942]: I0218 19:19:22.035706 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:22 crc kubenswrapper[4942]: E0218 19:19:22.035921 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:22 crc kubenswrapper[4942]: I0218 19:19:22.035969 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:22 crc kubenswrapper[4942]: E0218 19:19:22.036095 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:22 crc kubenswrapper[4942]: E0218 19:19:22.036246 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:22 crc kubenswrapper[4942]: I0218 19:19:22.106632 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qwg6q"] Feb 18 19:19:22 crc kubenswrapper[4942]: I0218 19:19:22.839495 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:22 crc kubenswrapper[4942]: E0218 19:19:22.840804 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:23 crc kubenswrapper[4942]: I0218 19:19:23.035387 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:23 crc kubenswrapper[4942]: E0218 19:19:23.035632 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:24 crc kubenswrapper[4942]: I0218 19:19:24.035842 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:24 crc kubenswrapper[4942]: I0218 19:19:24.035910 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:24 crc kubenswrapper[4942]: I0218 19:19:24.035886 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:24 crc kubenswrapper[4942]: E0218 19:19:24.036064 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:24 crc kubenswrapper[4942]: E0218 19:19:24.036302 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:24 crc kubenswrapper[4942]: E0218 19:19:24.036409 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:25 crc kubenswrapper[4942]: I0218 19:19:25.035162 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:25 crc kubenswrapper[4942]: E0218 19:19:25.035388 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:26 crc kubenswrapper[4942]: I0218 19:19:26.035259 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:26 crc kubenswrapper[4942]: I0218 19:19:26.035259 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:26 crc kubenswrapper[4942]: E0218 19:19:26.035483 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:26 crc kubenswrapper[4942]: E0218 19:19:26.035556 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:26 crc kubenswrapper[4942]: I0218 19:19:26.036544 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:26 crc kubenswrapper[4942]: E0218 19:19:26.036940 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:26 crc kubenswrapper[4942]: E0218 19:19:26.162833 4942 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 19:19:27 crc kubenswrapper[4942]: I0218 19:19:27.035497 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:27 crc kubenswrapper[4942]: E0218 19:19:27.035839 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:27 crc kubenswrapper[4942]: I0218 19:19:27.036606 4942 scope.go:117] "RemoveContainer" containerID="4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46" Feb 18 19:19:27 crc kubenswrapper[4942]: I0218 19:19:27.861705 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8jfwb_75150b8c-7a02-497b-86c3-eabc9c8dbc55/kube-multus/1.log" Feb 18 19:19:27 crc kubenswrapper[4942]: I0218 19:19:27.861842 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8jfwb" event={"ID":"75150b8c-7a02-497b-86c3-eabc9c8dbc55","Type":"ContainerStarted","Data":"62118c834582250ad430997ee392aa040ba0e100f92c0bb922d559c42cf4e958"} Feb 18 19:19:28 crc kubenswrapper[4942]: I0218 19:19:28.035206 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:28 crc kubenswrapper[4942]: I0218 19:19:28.035285 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:28 crc kubenswrapper[4942]: I0218 19:19:28.035342 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:28 crc kubenswrapper[4942]: E0218 19:19:28.035372 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:28 crc kubenswrapper[4942]: E0218 19:19:28.035594 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:28 crc kubenswrapper[4942]: E0218 19:19:28.036091 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:29 crc kubenswrapper[4942]: I0218 19:19:29.035871 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:29 crc kubenswrapper[4942]: E0218 19:19:29.036081 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:30 crc kubenswrapper[4942]: I0218 19:19:30.035787 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:30 crc kubenswrapper[4942]: I0218 19:19:30.035808 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:30 crc kubenswrapper[4942]: E0218 19:19:30.035970 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-qwg6q" podUID="ac5b5f40-34db-4aeb-abb4-57204673bd53" Feb 18 19:19:30 crc kubenswrapper[4942]: I0218 19:19:30.035808 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:30 crc kubenswrapper[4942]: E0218 19:19:30.036144 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 19:19:30 crc kubenswrapper[4942]: E0218 19:19:30.036242 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 19:19:31 crc kubenswrapper[4942]: I0218 19:19:31.034937 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:31 crc kubenswrapper[4942]: E0218 19:19:31.036957 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 19:19:32 crc kubenswrapper[4942]: I0218 19:19:32.035449 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:19:32 crc kubenswrapper[4942]: I0218 19:19:32.035616 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:32 crc kubenswrapper[4942]: I0218 19:19:32.035637 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:32 crc kubenswrapper[4942]: I0218 19:19:32.040547 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 18 19:19:32 crc kubenswrapper[4942]: I0218 19:19:32.042403 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 18 19:19:32 crc kubenswrapper[4942]: I0218 19:19:32.042466 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 18 19:19:32 crc kubenswrapper[4942]: I0218 19:19:32.042652 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 18 19:19:32 crc kubenswrapper[4942]: I0218 19:19:32.042713 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 18 19:19:32 crc kubenswrapper[4942]: I0218 19:19:32.042658 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 18 19:19:33 crc kubenswrapper[4942]: I0218 19:19:33.035829 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:39 crc kubenswrapper[4942]: I0218 19:19:39.916283 4942 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 18 19:19:39 crc kubenswrapper[4942]: I0218 19:19:39.992646 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd"] Feb 18 19:19:39 crc kubenswrapper[4942]: I0218 19:19:39.993418 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd" Feb 18 19:19:39 crc kubenswrapper[4942]: I0218 19:19:39.999651 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4pmfw"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.000283 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4pmfw" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.001016 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.001573 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.001723 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.003402 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bd7zz"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.004322 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.005283 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kpfjc"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.006013 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.006565 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-tndhs"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.007203 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-tndhs" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.023172 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.023172 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.027199 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.029538 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-5l26l"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.030206 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.034005 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.036325 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.036482 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.036545 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.036557 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.036627 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.036645 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.036972 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.037356 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.037508 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 19:19:40 crc 
kubenswrapper[4942]: I0218 19:19:40.037649 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.037543 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.037861 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.037514 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.040274 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.041153 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.041253 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.041281 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.041307 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.041327 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.041330 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 18 19:19:40 crc 
kubenswrapper[4942]: I0218 19:19:40.041378 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.041267 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.041445 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.042463 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.042588 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.042696 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.042790 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.042826 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.042934 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.043185 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.043331 4942 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-serving-cert" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.043671 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.044485 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.045279 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.045332 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.052727 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.056029 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.058737 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z4t28"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.061260 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.059155 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29-trusted-ca\") pod \"console-operator-58897d9998-4pmfw\" (UID: \"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29\") " pod="openshift-console-operator/console-operator-58897d9998-4pmfw" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.074034 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.077549 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.077630 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wh89\" (UniqueName: \"kubernetes.io/projected/42dda107-038c-42c1-8182-52bee75caea9-kube-api-access-2wh89\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.077663 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29-serving-cert\") pod \"console-operator-58897d9998-4pmfw\" (UID: 
\"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29\") " pod="openshift-console-operator/console-operator-58897d9998-4pmfw" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.077700 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.077729 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.077806 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.077840 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4afc5765-32dc-4b49-b1a3-9141c2c96087-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" Feb 18 19:19:40 crc 
kubenswrapper[4942]: I0218 19:19:40.077869 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvgsp\" (UniqueName: \"kubernetes.io/projected/4afc5765-32dc-4b49-b1a3-9141c2c96087-kube-api-access-mvgsp\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.077896 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4afc5765-32dc-4b49-b1a3-9141c2c96087-config\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.077946 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42dda107-038c-42c1-8182-52bee75caea9-audit-dir\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.077973 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078006 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078044 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078074 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29-config\") pod \"console-operator-58897d9998-4pmfw\" (UID: \"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29\") " pod="openshift-console-operator/console-operator-58897d9998-4pmfw" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078106 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cckls\" (UniqueName: \"kubernetes.io/projected/5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29-kube-api-access-cckls\") pod \"console-operator-58897d9998-4pmfw\" (UID: \"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29\") " pod="openshift-console-operator/console-operator-58897d9998-4pmfw" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078136 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078163 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4afc5765-32dc-4b49-b1a3-9141c2c96087-service-ca-bundle\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078197 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d941adf-0c5e-46d6-9a7c-a7677468f322-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2ldmd\" (UID: \"0d941adf-0c5e-46d6-9a7c-a7677468f322\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078230 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlvr4\" (UniqueName: \"kubernetes.io/projected/cb8403e3-f9b3-4ddf-8688-1a025a2b9291-kube-api-access-rlvr4\") pod \"downloads-7954f5f757-tndhs\" (UID: \"cb8403e3-f9b3-4ddf-8688-1a025a2b9291\") " pod="openshift-console/downloads-7954f5f757-tndhs" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078273 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4afc5765-32dc-4b49-b1a3-9141c2c96087-serving-cert\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078332 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078361 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078397 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-audit-policies\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078427 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d941adf-0c5e-46d6-9a7c-a7677468f322-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2ldmd\" (UID: \"0d941adf-0c5e-46d6-9a7c-a7677468f322\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078474 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chfv2\" (UniqueName: \"kubernetes.io/projected/0d941adf-0c5e-46d6-9a7c-a7677468f322-kube-api-access-chfv2\") pod 
\"openshift-apiserver-operator-796bbdcf4f-2ldmd\" (UID: \"0d941adf-0c5e-46d6-9a7c-a7677468f322\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.078506 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.082064 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-v5w2k"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.083080 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.086366 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2fcrf"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.087181 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.087360 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.088123 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-p42pr"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.088991 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.091495 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.091537 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.091638 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.092171 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.092487 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.092679 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.092714 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.093054 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.095187 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.095428 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.096294 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.096618 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.098935 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.099021 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.100105 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.100407 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 
19:19:40.100752 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.100872 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.101153 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.102258 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.102889 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.103109 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-x5rln"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.103139 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.103235 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.103331 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.103406 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.103479 4942 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.103553 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.103630 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.103720 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.104450 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.104692 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.104916 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.105199 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.105338 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.105435 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.105863 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 18 19:19:40 
crc kubenswrapper[4942]: I0218 19:19:40.105897 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.106053 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.106136 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.106181 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.106352 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.106506 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.106582 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.106656 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.106726 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.106835 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.107829 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bgd6x"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 
19:19:40.108575 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-bgd6x" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.122659 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.122991 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.123184 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.124532 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.124929 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.125204 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.125590 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.128452 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.129174 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.138702 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 18 
19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.138825 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.139268 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.151646 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.151703 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.151987 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.152103 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.152595 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.152820 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.153114 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.153159 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.153307 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.153405 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.153438 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.153405 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.153542 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.153587 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.153682 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.153791 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.153882 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 
19:19:40.154196 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.155031 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.155543 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.159518 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-fgw8l"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.160280 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.159531 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.161156 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.161193 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.160249 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.161281 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.162856 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.163560 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.164078 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.164396 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.164427 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.166894 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.167551 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.169845 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4pmfw"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.172210 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.173444 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.174092 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.176855 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.179117 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4afc5765-32dc-4b49-b1a3-9141c2c96087-config\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.179166 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-image-import-ca\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 
19:19:40.179190 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6890b7aa-fac3-4c00-90cc-4618ddfae25e-encryption-config\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.179213 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn6b4\" (UniqueName: \"kubernetes.io/projected/5683bb73-dc7f-40ed-86cd-0c08f2d38147-kube-api-access-nn6b4\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.179236 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/48a8b317-27eb-4d20-93ad-37fa559ec858-images\") pod \"machine-api-operator-5694c8668f-p42pr\" (UID: \"48a8b317-27eb-4d20-93ad-37fa559ec858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.179295 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42dda107-038c-42c1-8182-52bee75caea9-audit-dir\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.179533 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.179564 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg4sb\" (UniqueName: \"kubernetes.io/projected/e3586689-cf81-4cd2-84d1-70b0ce221b9d-kube-api-access-kg4sb\") pod \"cluster-samples-operator-665b6dd947-vms6h\" (UID: \"e3586689-cf81-4cd2-84d1-70b0ce221b9d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.180045 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.180434 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4afc5765-32dc-4b49-b1a3-9141c2c96087-config\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.180956 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa346657-46eb-4817-b206-4c09d46d4a55-serving-cert\") pod \"route-controller-manager-6576b87f9c-xbkl5\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.181124 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42dda107-038c-42c1-8182-52bee75caea9-audit-dir\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.181202 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9grql"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.181599 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.181755 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zj44h"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.181972 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.182266 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.182638 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9wcp7"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.183003 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9grql"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.183264 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zj44h"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.183529 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.183654 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.183792 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6890b7aa-fac3-4c00-90cc-4618ddfae25e-node-pullsecrets\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.185205 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-b488q"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.184269 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.185831 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.185980 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.186012 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.186227 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.186672 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.187139 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.190432 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jfkrb"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.191151 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.191660 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tndhs"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.191724 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.191986 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.193869 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kpfjc"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.194274 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-s57sd"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.195074 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-s57sd"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.195525 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.195631 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2fcrf"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.197429 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.200507 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-p42pr"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.200545 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-x5rln"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.200559 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bd7zz"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.202398 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6"]
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203374 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cbe755cf-b7a2-4557-9368-5d71df455408-audit-policies\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203419 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cbe755cf-b7a2-4557-9368-5d71df455408-etcd-client\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203453 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/994be5c4-0c9d-4577-82e8-644d64c3ab1d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-566m9\" (UID: \"994be5c4-0c9d-4577-82e8-644d64c3ab1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203477 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbqqn\" (UniqueName: \"kubernetes.io/projected/bccecc4d-32d0-4367-a3b6-e35ddf53dd1a-kube-api-access-dbqqn\") pod \"openshift-config-operator-7777fb866f-qk5bm\" (UID: \"bccecc4d-32d0-4367-a3b6-e35ddf53dd1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203509 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203532 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6890b7aa-fac3-4c00-90cc-4618ddfae25e-serving-cert\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203558 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29-config\") pod \"console-operator-58897d9998-4pmfw\" (UID: \"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29\") " pod="openshift-console-operator/console-operator-58897d9998-4pmfw"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203590 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cckls\" (UniqueName: \"kubernetes.io/projected/5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29-kube-api-access-cckls\") pod \"console-operator-58897d9998-4pmfw\" (UID: \"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29\") " pod="openshift-console-operator/console-operator-58897d9998-4pmfw"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203612 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6890b7aa-fac3-4c00-90cc-4618ddfae25e-audit-dir\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203635 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203661 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4afc5765-32dc-4b49-b1a3-9141c2c96087-service-ca-bundle\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203685 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/709f9378-2d1c-4158-9521-e6000e06eb5e-auth-proxy-config\") pod \"machine-approver-56656f9798-7x2vd\" (UID: \"709f9378-2d1c-4158-9521-e6000e06eb5e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203705 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-config\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203731 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d941adf-0c5e-46d6-9a7c-a7677468f322-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2ldmd\" (UID: \"0d941adf-0c5e-46d6-9a7c-a7677468f322\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203753 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86fdeda0-1ae3-488d-9612-d633a5fca64f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7tzn9\" (UID: \"86fdeda0-1ae3-488d-9612-d633a5fca64f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203801 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlvr4\" (UniqueName: \"kubernetes.io/projected/cb8403e3-f9b3-4ddf-8688-1a025a2b9291-kube-api-access-rlvr4\") pod \"downloads-7954f5f757-tndhs\" (UID: \"cb8403e3-f9b3-4ddf-8688-1a025a2b9291\") " pod="openshift-console/downloads-7954f5f757-tndhs"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203822 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/86fdeda0-1ae3-488d-9612-d633a5fca64f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7tzn9\" (UID: \"86fdeda0-1ae3-488d-9612-d633a5fca64f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203845 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/709f9378-2d1c-4158-9521-e6000e06eb5e-machine-approver-tls\") pod \"machine-approver-56656f9798-7x2vd\" (UID: \"709f9378-2d1c-4158-9521-e6000e06eb5e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203871 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709f9378-2d1c-4158-9521-e6000e06eb5e-config\") pod \"machine-approver-56656f9798-7x2vd\" (UID: \"709f9378-2d1c-4158-9521-e6000e06eb5e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203912 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4afc5765-32dc-4b49-b1a3-9141c2c96087-serving-cert\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203970 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.203996 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204027 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e3586689-cf81-4cd2-84d1-70b0ce221b9d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vms6h\" (UID: \"e3586689-cf81-4cd2-84d1-70b0ce221b9d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204051 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cbe755cf-b7a2-4557-9368-5d71df455408-encryption-config\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204076 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-config\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204095 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-audit\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204116 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-oauth-serving-cert\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204136 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48shh\" (UniqueName: \"kubernetes.io/projected/cbe755cf-b7a2-4557-9368-5d71df455408-kube-api-access-48shh\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204156 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-audit-policies\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204175 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cbe755cf-b7a2-4557-9368-5d71df455408-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204197 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbe755cf-b7a2-4557-9368-5d71df455408-serving-cert\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204221 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d941adf-0c5e-46d6-9a7c-a7677468f322-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2ldmd\" (UID: \"0d941adf-0c5e-46d6-9a7c-a7677468f322\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204247 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/86fdeda0-1ae3-488d-9612-d633a5fca64f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7tzn9\" (UID: \"86fdeda0-1ae3-488d-9612-d633a5fca64f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204264 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-oauth-config\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204282 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/48a8b317-27eb-4d20-93ad-37fa559ec858-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-p42pr\" (UID: \"48a8b317-27eb-4d20-93ad-37fa559ec858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204299 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-272vp\" (UniqueName: \"kubernetes.io/projected/48a8b317-27eb-4d20-93ad-37fa559ec858-kube-api-access-272vp\") pod \"machine-api-operator-5694c8668f-p42pr\" (UID: \"48a8b317-27eb-4d20-93ad-37fa559ec858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204319 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbe755cf-b7a2-4557-9368-5d71df455408-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204336 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa346657-46eb-4817-b206-4c09d46d4a55-client-ca\") pod \"route-controller-manager-6576b87f9c-xbkl5\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204355 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bccecc4d-32d0-4367-a3b6-e35ddf53dd1a-serving-cert\") pod \"openshift-config-operator-7777fb866f-qk5bm\" (UID: \"bccecc4d-32d0-4367-a3b6-e35ddf53dd1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204382 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wfd2\" (UniqueName: \"kubernetes.io/projected/6890b7aa-fac3-4c00-90cc-4618ddfae25e-kube-api-access-5wfd2\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204408 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6890b7aa-fac3-4c00-90cc-4618ddfae25e-etcd-client\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204425 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcdjq\" (UniqueName: \"kubernetes.io/projected/709f9378-2d1c-4158-9521-e6000e06eb5e-kube-api-access-pcdjq\") pod \"machine-approver-56656f9798-7x2vd\" (UID: \"709f9378-2d1c-4158-9521-e6000e06eb5e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204445 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mddqc\" (UniqueName: \"kubernetes.io/projected/fa346657-46eb-4817-b206-4c09d46d4a55-kube-api-access-mddqc\") pod \"route-controller-manager-6576b87f9c-xbkl5\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204473 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chfv2\" (UniqueName: \"kubernetes.io/projected/0d941adf-0c5e-46d6-9a7c-a7677468f322-kube-api-access-chfv2\") pod \"openshift-apiserver-operator-796bbdcf4f-2ldmd\" (UID: \"0d941adf-0c5e-46d6-9a7c-a7677468f322\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204492 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204513 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae259edb-f577-48b8-b236-91656ac269d2-metrics-tls\") pod \"dns-operator-744455d44c-bgd6x\" (UID: \"ae259edb-f577-48b8-b236-91656ac269d2\") " pod="openshift-dns-operator/dns-operator-744455d44c-bgd6x"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204533 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-service-ca\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204554 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cbe755cf-b7a2-4557-9368-5d71df455408-audit-dir\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204586 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29-trusted-ca\") pod \"console-operator-58897d9998-4pmfw\" (UID: \"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29\") " pod="openshift-console-operator/console-operator-58897d9998-4pmfw"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204619 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204643 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wh89\" (UniqueName: \"kubernetes.io/projected/42dda107-038c-42c1-8182-52bee75caea9-kube-api-access-2wh89\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204665 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29-serving-cert\") pod \"console-operator-58897d9998-4pmfw\" (UID: \"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29\") " pod="openshift-console-operator/console-operator-58897d9998-4pmfw"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204689 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drnjv\" (UniqueName: \"kubernetes.io/projected/86fdeda0-1ae3-488d-9612-d633a5fca64f-kube-api-access-drnjv\") pod \"cluster-image-registry-operator-dc59b4c8b-7tzn9\" (UID: \"86fdeda0-1ae3-488d-9612-d633a5fca64f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204717 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204740 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204780 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/994be5c4-0c9d-4577-82e8-644d64c3ab1d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-566m9\" (UID: \"994be5c4-0c9d-4577-82e8-644d64c3ab1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204803 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa346657-46eb-4817-b206-4c09d46d4a55-config\") pod \"route-controller-manager-6576b87f9c-xbkl5\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204827 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bccecc4d-32d0-4367-a3b6-e35ddf53dd1a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qk5bm\" (UID: \"bccecc4d-32d0-4367-a3b6-e35ddf53dd1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204853 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204878 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsz8q\" (UniqueName: \"kubernetes.io/projected/ae259edb-f577-48b8-b236-91656ac269d2-kube-api-access-rsz8q\") pod \"dns-operator-744455d44c-bgd6x\" (UID: \"ae259edb-f577-48b8-b236-91656ac269d2\") " pod="openshift-dns-operator/dns-operator-744455d44c-bgd6x"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204901 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-serving-cert\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204923 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48a8b317-27eb-4d20-93ad-37fa559ec858-config\") pod \"machine-api-operator-5694c8668f-p42pr\" (UID: \"48a8b317-27eb-4d20-93ad-37fa559ec858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204950 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4afc5765-32dc-4b49-b1a3-9141c2c96087-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.204971 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-etcd-serving-ca\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.205066 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-trusted-ca-bundle\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.205094 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvgsp\" (UniqueName: \"kubernetes.io/projected/4afc5765-32dc-4b49-b1a3-9141c2c96087-kube-api-access-mvgsp\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.205116 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994be5c4-0c9d-4577-82e8-644d64c3ab1d-config\") pod \"kube-apiserver-operator-766d6c64bb-566m9\" (UID: \"994be5c4-0c9d-4577-82e8-644d64c3ab1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.209631 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29-config\") pod \"console-operator-58897d9998-4pmfw\" (UID: \"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29\") " pod="openshift-console-operator/console-operator-58897d9998-4pmfw" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.221508 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4afc5765-32dc-4b49-b1a3-9141c2c96087-service-ca-bundle\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.222371 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d941adf-0c5e-46d6-9a7c-a7677468f322-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2ldmd\" (UID: \"0d941adf-0c5e-46d6-9a7c-a7677468f322\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.225436 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-5l26l"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.225497 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-v5w2k"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.225451 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.225754 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-audit-policies\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.226427 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.228294 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.230064 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.230183 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.230726 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.230915 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z4t28"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.231599 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29-serving-cert\") pod \"console-operator-58897d9998-4pmfw\" (UID: \"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29\") " pod="openshift-console-operator/console-operator-58897d9998-4pmfw" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.231641 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.232686 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.233053 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4afc5765-32dc-4b49-b1a3-9141c2c96087-serving-cert\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.233492 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29-trusted-ca\") pod \"console-operator-58897d9998-4pmfw\" (UID: \"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29\") " pod="openshift-console-operator/console-operator-58897d9998-4pmfw" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.234628 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4afc5765-32dc-4b49-b1a3-9141c2c96087-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 
19:19:40.235579 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.236220 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.237103 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d941adf-0c5e-46d6-9a7c-a7677468f322-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2ldmd\" (UID: \"0d941adf-0c5e-46d6-9a7c-a7677468f322\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.243100 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.244260 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.246017 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.247093 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns-operator/dns-operator-744455d44c-bgd6x"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.248174 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.249635 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.250701 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.251814 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.253453 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.255309 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.255647 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-s4kjv"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.256615 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-s4kjv" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.257452 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-w9lpz"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.260372 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.260498 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.260517 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.261203 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-s57sd"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.263025 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.264342 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.266245 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.267195 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.270404 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zj44h"] Feb 18 19:19:40 crc 
kubenswrapper[4942]: I0218 19:19:40.272163 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.273794 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9grql"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.276422 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9wcp7"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.276454 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.277820 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.279191 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.280025 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jfkrb"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.281089 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.282042 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-b488q"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.283395 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-s4kjv"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.284520 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["hostpath-provisioner/csi-hostpathplugin-w9lpz"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.285770 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-xs9jl"] Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.286398 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-xs9jl" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.295612 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.305969 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48shh\" (UniqueName: \"kubernetes.io/projected/cbe755cf-b7a2-4557-9368-5d71df455408-kube-api-access-48shh\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306005 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-config\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306023 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-audit\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306040 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-oauth-serving-cert\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306058 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cbe755cf-b7a2-4557-9368-5d71df455408-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306073 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbe755cf-b7a2-4557-9368-5d71df455408-serving-cert\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306092 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/86fdeda0-1ae3-488d-9612-d633a5fca64f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7tzn9\" (UID: \"86fdeda0-1ae3-488d-9612-d633a5fca64f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306111 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-oauth-config\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306225 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/48a8b317-27eb-4d20-93ad-37fa559ec858-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-p42pr\" (UID: \"48a8b317-27eb-4d20-93ad-37fa559ec858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306244 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-272vp\" (UniqueName: \"kubernetes.io/projected/48a8b317-27eb-4d20-93ad-37fa559ec858-kube-api-access-272vp\") pod \"machine-api-operator-5694c8668f-p42pr\" (UID: \"48a8b317-27eb-4d20-93ad-37fa559ec858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306259 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbe755cf-b7a2-4557-9368-5d71df455408-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306274 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa346657-46eb-4817-b206-4c09d46d4a55-client-ca\") pod \"route-controller-manager-6576b87f9c-xbkl5\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306292 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bccecc4d-32d0-4367-a3b6-e35ddf53dd1a-serving-cert\") pod \"openshift-config-operator-7777fb866f-qk5bm\" (UID: \"bccecc4d-32d0-4367-a3b6-e35ddf53dd1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm" Feb 18 
19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306306 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wfd2\" (UniqueName: \"kubernetes.io/projected/6890b7aa-fac3-4c00-90cc-4618ddfae25e-kube-api-access-5wfd2\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306323 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6890b7aa-fac3-4c00-90cc-4618ddfae25e-etcd-client\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306338 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcdjq\" (UniqueName: \"kubernetes.io/projected/709f9378-2d1c-4158-9521-e6000e06eb5e-kube-api-access-pcdjq\") pod \"machine-approver-56656f9798-7x2vd\" (UID: \"709f9378-2d1c-4158-9521-e6000e06eb5e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306352 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mddqc\" (UniqueName: \"kubernetes.io/projected/fa346657-46eb-4817-b206-4c09d46d4a55-kube-api-access-mddqc\") pod \"route-controller-manager-6576b87f9c-xbkl5\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306366 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae259edb-f577-48b8-b236-91656ac269d2-metrics-tls\") pod \"dns-operator-744455d44c-bgd6x\" (UID: 
\"ae259edb-f577-48b8-b236-91656ac269d2\") " pod="openshift-dns-operator/dns-operator-744455d44c-bgd6x" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306382 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-service-ca\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306402 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cbe755cf-b7a2-4557-9368-5d71df455408-audit-dir\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306436 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drnjv\" (UniqueName: \"kubernetes.io/projected/86fdeda0-1ae3-488d-9612-d633a5fca64f-kube-api-access-drnjv\") pod \"cluster-image-registry-operator-dc59b4c8b-7tzn9\" (UID: \"86fdeda0-1ae3-488d-9612-d633a5fca64f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306452 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/994be5c4-0c9d-4577-82e8-644d64c3ab1d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-566m9\" (UID: \"994be5c4-0c9d-4577-82e8-644d64c3ab1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306467 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fa346657-46eb-4817-b206-4c09d46d4a55-config\") pod \"route-controller-manager-6576b87f9c-xbkl5\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306484 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bccecc4d-32d0-4367-a3b6-e35ddf53dd1a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qk5bm\" (UID: \"bccecc4d-32d0-4367-a3b6-e35ddf53dd1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306507 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-serving-cert\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306525 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48a8b317-27eb-4d20-93ad-37fa559ec858-config\") pod \"machine-api-operator-5694c8668f-p42pr\" (UID: \"48a8b317-27eb-4d20-93ad-37fa559ec858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306543 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsz8q\" (UniqueName: \"kubernetes.io/projected/ae259edb-f577-48b8-b236-91656ac269d2-kube-api-access-rsz8q\") pod \"dns-operator-744455d44c-bgd6x\" (UID: \"ae259edb-f577-48b8-b236-91656ac269d2\") " pod="openshift-dns-operator/dns-operator-744455d44c-bgd6x" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306562 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-etcd-serving-ca\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306584 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-trusted-ca-bundle\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306612 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994be5c4-0c9d-4577-82e8-644d64c3ab1d-config\") pod \"kube-apiserver-operator-766d6c64bb-566m9\" (UID: \"994be5c4-0c9d-4577-82e8-644d64c3ab1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306630 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn6b4\" (UniqueName: \"kubernetes.io/projected/5683bb73-dc7f-40ed-86cd-0c08f2d38147-kube-api-access-nn6b4\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306645 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/48a8b317-27eb-4d20-93ad-37fa559ec858-images\") pod \"machine-api-operator-5694c8668f-p42pr\" (UID: \"48a8b317-27eb-4d20-93ad-37fa559ec858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 
19:19:40.306665 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-image-import-ca\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306687 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6890b7aa-fac3-4c00-90cc-4618ddfae25e-encryption-config\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306709 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg4sb\" (UniqueName: \"kubernetes.io/projected/e3586689-cf81-4cd2-84d1-70b0ce221b9d-kube-api-access-kg4sb\") pod \"cluster-samples-operator-665b6dd947-vms6h\" (UID: \"e3586689-cf81-4cd2-84d1-70b0ce221b9d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306726 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa346657-46eb-4817-b206-4c09d46d4a55-serving-cert\") pod \"route-controller-manager-6576b87f9c-xbkl5\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306751 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " 
pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306775 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-config\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306845 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6890b7aa-fac3-4c00-90cc-4618ddfae25e-node-pullsecrets\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306788 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-oauth-serving-cert\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306789 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6890b7aa-fac3-4c00-90cc-4618ddfae25e-node-pullsecrets\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306911 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cbe755cf-b7a2-4557-9368-5d71df455408-audit-policies\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306936 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cbe755cf-b7a2-4557-9368-5d71df455408-etcd-client\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306966 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/994be5c4-0c9d-4577-82e8-644d64c3ab1d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-566m9\" (UID: \"994be5c4-0c9d-4577-82e8-644d64c3ab1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.306993 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbqqn\" (UniqueName: \"kubernetes.io/projected/bccecc4d-32d0-4367-a3b6-e35ddf53dd1a-kube-api-access-dbqqn\") pod \"openshift-config-operator-7777fb866f-qk5bm\" (UID: \"bccecc4d-32d0-4367-a3b6-e35ddf53dd1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.307021 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6890b7aa-fac3-4c00-90cc-4618ddfae25e-serving-cert\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.307076 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6890b7aa-fac3-4c00-90cc-4618ddfae25e-audit-dir\") pod \"apiserver-76f77b778f-v5w2k\" 
(UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.307107 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-config\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.307133 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/709f9378-2d1c-4158-9521-e6000e06eb5e-auth-proxy-config\") pod \"machine-approver-56656f9798-7x2vd\" (UID: \"709f9378-2d1c-4158-9521-e6000e06eb5e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.307163 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86fdeda0-1ae3-488d-9612-d633a5fca64f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7tzn9\" (UID: \"86fdeda0-1ae3-488d-9612-d633a5fca64f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.307186 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/709f9378-2d1c-4158-9521-e6000e06eb5e-machine-approver-tls\") pod \"machine-approver-56656f9798-7x2vd\" (UID: \"709f9378-2d1c-4158-9521-e6000e06eb5e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.307207 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/709f9378-2d1c-4158-9521-e6000e06eb5e-config\") pod \"machine-approver-56656f9798-7x2vd\" (UID: \"709f9378-2d1c-4158-9521-e6000e06eb5e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.307243 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/86fdeda0-1ae3-488d-9612-d633a5fca64f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7tzn9\" (UID: \"86fdeda0-1ae3-488d-9612-d633a5fca64f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.307312 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cbe755cf-b7a2-4557-9368-5d71df455408-encryption-config\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.307342 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e3586689-cf81-4cd2-84d1-70b0ce221b9d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vms6h\" (UID: \"e3586689-cf81-4cd2-84d1-70b0ce221b9d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.307361 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cbe755cf-b7a2-4557-9368-5d71df455408-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 
19:19:40.307858 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa346657-46eb-4817-b206-4c09d46d4a55-config\") pod \"route-controller-manager-6576b87f9c-xbkl5\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.307857 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cbe755cf-b7a2-4557-9368-5d71df455408-audit-dir\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.308442 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/709f9378-2d1c-4158-9521-e6000e06eb5e-config\") pod \"machine-approver-56656f9798-7x2vd\" (UID: \"709f9378-2d1c-4158-9521-e6000e06eb5e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.308465 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-service-ca\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.308502 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbe755cf-b7a2-4557-9368-5d71df455408-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.309228 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/709f9378-2d1c-4158-9521-e6000e06eb5e-auth-proxy-config\") pod \"machine-approver-56656f9798-7x2vd\" (UID: \"709f9378-2d1c-4158-9521-e6000e06eb5e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.309430 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cbe755cf-b7a2-4557-9368-5d71df455408-audit-policies\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.309650 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbe755cf-b7a2-4557-9368-5d71df455408-serving-cert\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.309828 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994be5c4-0c9d-4577-82e8-644d64c3ab1d-config\") pod \"kube-apiserver-operator-766d6c64bb-566m9\" (UID: \"994be5c4-0c9d-4577-82e8-644d64c3ab1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.310272 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-config\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.310309 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bccecc4d-32d0-4367-a3b6-e35ddf53dd1a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qk5bm\" (UID: \"bccecc4d-32d0-4367-a3b6-e35ddf53dd1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.310447 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/48a8b317-27eb-4d20-93ad-37fa559ec858-images\") pod \"machine-api-operator-5694c8668f-p42pr\" (UID: \"48a8b317-27eb-4d20-93ad-37fa559ec858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.310724 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6890b7aa-fac3-4c00-90cc-4618ddfae25e-audit-dir\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.311310 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86fdeda0-1ae3-488d-9612-d633a5fca64f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-7tzn9\" (UID: \"86fdeda0-1ae3-488d-9612-d633a5fca64f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.311361 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc 
kubenswrapper[4942]: I0218 19:19:40.311391 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-audit\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.311435 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-image-import-ca\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.311958 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48a8b317-27eb-4d20-93ad-37fa559ec858-config\") pod \"machine-api-operator-5694c8668f-p42pr\" (UID: \"48a8b317-27eb-4d20-93ad-37fa559ec858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.311972 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6890b7aa-fac3-4c00-90cc-4618ddfae25e-etcd-client\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.312017 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6890b7aa-fac3-4c00-90cc-4618ddfae25e-etcd-serving-ca\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.312182 4942 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa346657-46eb-4817-b206-4c09d46d4a55-client-ca\") pod \"route-controller-manager-6576b87f9c-xbkl5\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.312353 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cbe755cf-b7a2-4557-9368-5d71df455408-etcd-client\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.312597 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-trusted-ca-bundle\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.313126 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/994be5c4-0c9d-4577-82e8-644d64c3ab1d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-566m9\" (UID: \"994be5c4-0c9d-4577-82e8-644d64c3ab1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.313160 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bccecc4d-32d0-4367-a3b6-e35ddf53dd1a-serving-cert\") pod \"openshift-config-operator-7777fb866f-qk5bm\" (UID: \"bccecc4d-32d0-4367-a3b6-e35ddf53dd1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.313324 
4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/48a8b317-27eb-4d20-93ad-37fa559ec858-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-p42pr\" (UID: \"48a8b317-27eb-4d20-93ad-37fa559ec858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.313794 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/709f9378-2d1c-4158-9521-e6000e06eb5e-machine-approver-tls\") pod \"machine-approver-56656f9798-7x2vd\" (UID: \"709f9378-2d1c-4158-9521-e6000e06eb5e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.313839 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cbe755cf-b7a2-4557-9368-5d71df455408-encryption-config\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.314072 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6890b7aa-fac3-4c00-90cc-4618ddfae25e-serving-cert\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.314098 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6890b7aa-fac3-4c00-90cc-4618ddfae25e-encryption-config\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:40 crc kubenswrapper[4942]: 
I0218 19:19:40.314810 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e3586689-cf81-4cd2-84d1-70b0ce221b9d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-vms6h\" (UID: \"e3586689-cf81-4cd2-84d1-70b0ce221b9d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.315406 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa346657-46eb-4817-b206-4c09d46d4a55-serving-cert\") pod \"route-controller-manager-6576b87f9c-xbkl5\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.315875 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-oauth-config\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.316031 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-serving-cert\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.316135 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.335853 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 18 19:19:40 crc 
kubenswrapper[4942]: I0218 19:19:40.355525 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.384371 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.416231 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.436841 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.445149 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae259edb-f577-48b8-b236-91656ac269d2-metrics-tls\") pod \"dns-operator-744455d44c-bgd6x\" (UID: \"ae259edb-f577-48b8-b236-91656ac269d2\") " pod="openshift-dns-operator/dns-operator-744455d44c-bgd6x" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.455747 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.478107 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.495627 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.516554 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.536556 4942 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.542266 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/86fdeda0-1ae3-488d-9612-d633a5fca64f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-7tzn9\" (UID: \"86fdeda0-1ae3-488d-9612-d633a5fca64f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.556533 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.575888 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.597063 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.616450 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.636689 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.656332 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.676515 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 18 19:19:40 
crc kubenswrapper[4942]: I0218 19:19:40.696879 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.737495 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.758604 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.777025 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.796882 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.816309 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.837411 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.856821 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.876836 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.897163 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.916719 4942 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.937444 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.956341 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.976391 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 18 19:19:40 crc kubenswrapper[4942]: I0218 19:19:40.995921 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.016844 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.037233 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.057799 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.076921 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.096165 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.116733 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.136861 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.156601 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.176442 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.194341 4942 request.go:700] Waited for 1.012403879s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&limit=500&resourceVersion=0
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.196713 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.216885 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.236510 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.255727 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.275842 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.296953 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.316647 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.336347 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.356081 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.378124 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.397411 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.416390 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.436584 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.456250 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.476820 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.497512 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.516609 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.536217 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.557351 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.575961 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.597028 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.617103 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.636374 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.656875 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.689604 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.696802 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.716607 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.736497 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.757536 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.775806 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.797040 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.816907 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.836178 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.856455 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.903314 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cckls\" (UniqueName: \"kubernetes.io/projected/5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29-kube-api-access-cckls\") pod \"console-operator-58897d9998-4pmfw\" (UID: \"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29\") " pod="openshift-console-operator/console-operator-58897d9998-4pmfw"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.929258 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlvr4\" (UniqueName: \"kubernetes.io/projected/cb8403e3-f9b3-4ddf-8688-1a025a2b9291-kube-api-access-rlvr4\") pod \"downloads-7954f5f757-tndhs\" (UID: \"cb8403e3-f9b3-4ddf-8688-1a025a2b9291\") " pod="openshift-console/downloads-7954f5f757-tndhs"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.934065 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chfv2\" (UniqueName: \"kubernetes.io/projected/0d941adf-0c5e-46d6-9a7c-a7677468f322-kube-api-access-chfv2\") pod \"openshift-apiserver-operator-796bbdcf4f-2ldmd\" (UID: \"0d941adf-0c5e-46d6-9a7c-a7677468f322\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.960728 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvgsp\" (UniqueName: \"kubernetes.io/projected/4afc5765-32dc-4b49-b1a3-9141c2c96087-kube-api-access-mvgsp\") pod \"authentication-operator-69f744f599-bd7zz\" (UID: \"4afc5765-32dc-4b49-b1a3-9141c2c96087\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.974971 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wh89\" (UniqueName: \"kubernetes.io/projected/42dda107-038c-42c1-8182-52bee75caea9-kube-api-access-2wh89\") pod \"oauth-openshift-558db77b4-kpfjc\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.975706 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 18 19:19:41 crc kubenswrapper[4942]: I0218 19:19:41.996474 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.016833 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.035899 4942 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.056719 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.076107 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.095991 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.116633 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.119659 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.135869 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4pmfw"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.138082 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.161329 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.182133 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48shh\" (UniqueName: \"kubernetes.io/projected/cbe755cf-b7a2-4557-9368-5d71df455408-kube-api-access-48shh\") pod \"apiserver-7bbb656c7d-q9pxc\" (UID: \"cbe755cf-b7a2-4557-9368-5d71df455408\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.185240 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.193734 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/86fdeda0-1ae3-488d-9612-d633a5fca64f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-7tzn9\" (UID: \"86fdeda0-1ae3-488d-9612-d633a5fca64f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.210067 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-tndhs"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.214583 4942 request.go:700] Waited for 1.906630391s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.230576 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drnjv\" (UniqueName: \"kubernetes.io/projected/86fdeda0-1ae3-488d-9612-d633a5fca64f-kube-api-access-drnjv\") pod \"cluster-image-registry-operator-dc59b4c8b-7tzn9\" (UID: \"86fdeda0-1ae3-488d-9612-d633a5fca64f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.271502 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.278351 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/994be5c4-0c9d-4577-82e8-644d64c3ab1d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-566m9\" (UID: \"994be5c4-0c9d-4577-82e8-644d64c3ab1d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.291880 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcdjq\" (UniqueName: \"kubernetes.io/projected/709f9378-2d1c-4158-9521-e6000e06eb5e-kube-api-access-pcdjq\") pod \"machine-approver-56656f9798-7x2vd\" (UID: \"709f9378-2d1c-4158-9521-e6000e06eb5e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.291909 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mddqc\" (UniqueName: \"kubernetes.io/projected/fa346657-46eb-4817-b206-4c09d46d4a55-kube-api-access-mddqc\") pod \"route-controller-manager-6576b87f9c-xbkl5\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.305869 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-272vp\" (UniqueName: \"kubernetes.io/projected/48a8b317-27eb-4d20-93ad-37fa559ec858-kube-api-access-272vp\") pod \"machine-api-operator-5694c8668f-p42pr\" (UID: \"48a8b317-27eb-4d20-93ad-37fa559ec858\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.324134 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn6b4\" (UniqueName: \"kubernetes.io/projected/5683bb73-dc7f-40ed-86cd-0c08f2d38147-kube-api-access-nn6b4\") pod \"console-f9d7485db-5l26l\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " pod="openshift-console/console-f9d7485db-5l26l"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.336894 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg4sb\" (UniqueName: \"kubernetes.io/projected/e3586689-cf81-4cd2-84d1-70b0ce221b9d-kube-api-access-kg4sb\") pod \"cluster-samples-operator-665b6dd947-vms6h\" (UID: \"e3586689-cf81-4cd2-84d1-70b0ce221b9d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.355167 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.359013 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsz8q\" (UniqueName: \"kubernetes.io/projected/ae259edb-f577-48b8-b236-91656ac269d2-kube-api-access-rsz8q\") pod \"dns-operator-744455d44c-bgd6x\" (UID: \"ae259edb-f577-48b8-b236-91656ac269d2\") " pod="openshift-dns-operator/dns-operator-744455d44c-bgd6x"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.380270 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.387540 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbqqn\" (UniqueName: \"kubernetes.io/projected/bccecc4d-32d0-4367-a3b6-e35ddf53dd1a-kube-api-access-dbqqn\") pod \"openshift-config-operator-7777fb866f-qk5bm\" (UID: \"bccecc4d-32d0-4367-a3b6-e35ddf53dd1a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.394673 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.402115 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wfd2\" (UniqueName: \"kubernetes.io/projected/6890b7aa-fac3-4c00-90cc-4618ddfae25e-kube-api-access-5wfd2\") pod \"apiserver-76f77b778f-v5w2k\" (UID: \"6890b7aa-fac3-4c00-90cc-4618ddfae25e\") " pod="openshift-apiserver/apiserver-76f77b778f-v5w2k"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.410394 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.420978 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.423468 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd"]
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.435108 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-bgd6x"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.445958 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-bound-sa-token\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.446059 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bpp7\" (UniqueName: \"kubernetes.io/projected/5d6ad520-b407-4b86-867b-9e9658bfa536-kube-api-access-2bpp7\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.446138 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25z4w\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-kube-api-access-25z4w\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.446166 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a79c946-4621-4b6d-af59-6b919d125502-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jqs9l\" (UID: \"9a79c946-4621-4b6d-af59-6b919d125502\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.447193 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-client-ca\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.447279 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-registry-tls\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.447357 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/087f0c6b-3e9f-4db4-bbcb-a8075e218219-trusted-ca\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.447423 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmwzh\" (UniqueName: \"kubernetes.io/projected/9a79c946-4621-4b6d-af59-6b919d125502-kube-api-access-tmwzh\") pod \"kube-storage-version-migrator-operator-b67b599dd-jqs9l\" (UID: \"9a79c946-4621-4b6d-af59-6b919d125502\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.447466 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-config\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.447487 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/714a349f-4480-4467-9041-7cae31df7686-etcd-ca\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.447679 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/087f0c6b-3e9f-4db4-bbcb-a8075e218219-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.448144 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05aed8e4-390c-4589-8a61-2aab50a1d90f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-rw75p\" (UID: \"05aed8e4-390c-4589-8a61-2aab50a1d90f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.448217 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/714a349f-4480-4467-9041-7cae31df7686-config\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.448246 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d6ad520-b407-4b86-867b-9e9658bfa536-serving-cert\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.448306 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.448402 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7smb\" (UniqueName: \"kubernetes.io/projected/05aed8e4-390c-4589-8a61-2aab50a1d90f-kube-api-access-v7smb\") pod \"ingress-operator-5b745b69d9-rw75p\" (UID: \"05aed8e4-390c-4589-8a61-2aab50a1d90f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.448720 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a79c946-4621-4b6d-af59-6b919d125502-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jqs9l\" (UID: \"9a79c946-4621-4b6d-af59-6b919d125502\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.448820 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/714a349f-4480-4467-9041-7cae31df7686-serving-cert\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.448943 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:42 crc kubenswrapper[4942]: E0218 19:19:42.449515 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:42.949482006 +0000 UTC m=+142.654414671 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.449564 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/087f0c6b-3e9f-4db4-bbcb-a8075e218219-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.449607 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05aed8e4-390c-4589-8a61-2aab50a1d90f-trusted-ca\") pod \"ingress-operator-5b745b69d9-rw75p\" (UID: \"05aed8e4-390c-4589-8a61-2aab50a1d90f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.449775 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/714a349f-4480-4467-9041-7cae31df7686-etcd-service-ca\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.449924 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/087f0c6b-3e9f-4db4-bbcb-a8075e218219-registry-certificates\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.449955 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/714a349f-4480-4467-9041-7cae31df7686-etcd-client\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.450002 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/05aed8e4-390c-4589-8a61-2aab50a1d90f-metrics-tls\") pod \"ingress-operator-5b745b69d9-rw75p\" (UID: \"05aed8e4-390c-4589-8a61-2aab50a1d90f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.450047 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtlv7\" (UniqueName: \"kubernetes.io/projected/714a349f-4480-4467-9041-7cae31df7686-kube-api-access-vtlv7\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.452700 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9"
Feb 18 19:19:42 crc kubenswrapper[4942]: W0218 19:19:42.466123 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d941adf_0c5e_46d6_9a7c_a7677468f322.slice/crio-da475d0721a7d193057b00168df055f153446074dc5684b508d817f1c1d3fe48 WatchSource:0}: Error finding container da475d0721a7d193057b00168df055f153446074dc5684b508d817f1c1d3fe48: Status 404 returned error can't find the container with id da475d0721a7d193057b00168df055f153446074dc5684b508d817f1c1d3fe48
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.552360 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:19:42 crc kubenswrapper[4942]: E0218 19:19:42.552534 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:43.052510564 +0000 UTC m=+142.757443229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.552564 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d6ad520-b407-4b86-867b-9e9658bfa536-serving-cert\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.552593 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wmrg\" (UniqueName: \"kubernetes.io/projected/9b732dca-66e7-48c3-bd7d-5efc1d9662d7-kube-api-access-8wmrg\") pod \"service-ca-operator-777779d784-v6bqq\" (UID: \"9b732dca-66e7-48c3-bd7d-5efc1d9662d7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.552611 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-plugins-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.552635 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.552651 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j82p\" (UniqueName: \"kubernetes.io/projected/2407a935-a8b9-4894-baaf-7460fee3d22b-kube-api-access-8j82p\") pod \"openshift-controller-manager-operator-756b6f6bc6-9mc8z\" (UID: \"2407a935-a8b9-4894-baaf-7460fee3d22b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.552667 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/696bcbdd-c9ca-45cd-ae12-e733919e2832-signing-key\") pod \"service-ca-9c57cc56f-9wcp7\" (UID: \"696bcbdd-c9ca-45cd-ae12-e733919e2832\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.552681 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8134898c-a265-4fa0-8548-075ea0812b7b-service-ca-bundle\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.552699 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8e73114-6ccf-40ba-94e8-437e2db303fb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-z4vt6\" (UID: \"c8e73114-6ccf-40ba-94e8-437e2db303fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.552725 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7smb\" (UniqueName: \"kubernetes.io/projected/05aed8e4-390c-4589-8a61-2aab50a1d90f-kube-api-access-v7smb\") pod \"ingress-operator-5b745b69d9-rw75p\" (UID: \"05aed8e4-390c-4589-8a61-2aab50a1d90f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.552742 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a79c946-4621-4b6d-af59-6b919d125502-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jqs9l\" (UID: \"9a79c946-4621-4b6d-af59-6b919d125502\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.553208 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgftz\" (UniqueName: \"kubernetes.io/projected/0e51d4dc-e813-4166-bb6a-45d083a09d2a-kube-api-access-wgftz\") pod \"migrator-59844c95c7-9grql\" (UID: \"0e51d4dc-e813-4166-bb6a-45d083a09d2a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9grql"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.553261 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz8wn\" (UniqueName: \"kubernetes.io/projected/af99a6af-5df3-4b87-8f14-a564c5d86164-kube-api-access-cz8wn\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.553308 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName:
\"kubernetes.io/empty-dir/087f0c6b-3e9f-4db4-bbcb-a8075e218219-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.553334 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88vg5\" (UniqueName: \"kubernetes.io/projected/5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e-kube-api-access-88vg5\") pod \"packageserver-d55dfcdfc-g5df6\" (UID: \"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.553355 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ae63de17-3438-46ed-94f9-5f51d8a216fd-certs\") pod \"machine-config-server-xs9jl\" (UID: \"ae63de17-3438-46ed-94f9-5f51d8a216fd\") " pod="openshift-machine-config-operator/machine-config-server-xs9jl" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.553402 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/efab374b-fec3-4b4e-81f1-002715812a67-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jfkrb\" (UID: \"efab374b-fec3-4b4e-81f1-002715812a67\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.553747 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/714a349f-4480-4467-9041-7cae31df7686-etcd-service-ca\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" Feb 18 19:19:42 crc 
kubenswrapper[4942]: I0218 19:19:42.553815 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/087f0c6b-3e9f-4db4-bbcb-a8075e218219-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.553909 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d48r9\" (UniqueName: \"kubernetes.io/projected/ae63de17-3438-46ed-94f9-5f51d8a216fd-kube-api-access-d48r9\") pod \"machine-config-server-xs9jl\" (UID: \"ae63de17-3438-46ed-94f9-5f51d8a216fd\") " pod="openshift-machine-config-operator/machine-config-server-xs9jl" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554020 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zj44h\" (UID: \"b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zj44h" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554049 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrch7\" (UniqueName: \"kubernetes.io/projected/696bcbdd-c9ca-45cd-ae12-e733919e2832-kube-api-access-qrch7\") pod \"service-ca-9c57cc56f-9wcp7\" (UID: \"696bcbdd-c9ca-45cd-ae12-e733919e2832\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554079 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01ba4570-01bb-4964-8c1d-791c25d72a1a-config-volume\") pod 
\"collect-profiles-29524035-tk5g4\" (UID: \"01ba4570-01bb-4964-8c1d-791c25d72a1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554102 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51ed31a1-9bf0-40ff-8bca-041d691662b4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-b488q\" (UID: \"51ed31a1-9bf0-40ff-8bca-041d691662b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554123 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl889\" (UniqueName: \"kubernetes.io/projected/b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7-kube-api-access-jl889\") pod \"multus-admission-controller-857f4d67dd-zj44h\" (UID: \"b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zj44h" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554194 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-csi-data-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554323 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/714a349f-4480-4467-9041-7cae31df7686-etcd-service-ca\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554345 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-bound-sa-token\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554471 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554617 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8134898c-a265-4fa0-8548-075ea0812b7b-metrics-certs\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554735 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a79c946-4621-4b6d-af59-6b919d125502-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jqs9l\" (UID: \"9a79c946-4621-4b6d-af59-6b919d125502\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554780 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-client-ca\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554831 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h4qr\" (UniqueName: \"kubernetes.io/projected/51ed31a1-9bf0-40ff-8bca-041d691662b4-kube-api-access-4h4qr\") pod \"machine-config-operator-74547568cd-b488q\" (UID: \"51ed31a1-9bf0-40ff-8bca-041d691662b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.554948 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/157afc1c-f5df-419b-a760-336d14bbbd6d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vqvpq\" (UID: \"157afc1c-f5df-419b-a760-336d14bbbd6d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.555053 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01ba4570-01bb-4964-8c1d-791c25d72a1a-secret-volume\") pod \"collect-profiles-29524035-tk5g4\" (UID: \"01ba4570-01bb-4964-8c1d-791c25d72a1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.555095 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g7fx\" (UniqueName: \"kubernetes.io/projected/01ba4570-01bb-4964-8c1d-791c25d72a1a-kube-api-access-7g7fx\") pod \"collect-profiles-29524035-tk5g4\" (UID: \"01ba4570-01bb-4964-8c1d-791c25d72a1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.555194 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tmwzh\" (UniqueName: \"kubernetes.io/projected/9a79c946-4621-4b6d-af59-6b919d125502-kube-api-access-tmwzh\") pod \"kube-storage-version-migrator-operator-b67b599dd-jqs9l\" (UID: \"9a79c946-4621-4b6d-af59-6b919d125502\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.555227 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a79c946-4621-4b6d-af59-6b919d125502-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jqs9l\" (UID: \"9a79c946-4621-4b6d-af59-6b919d125502\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.555278 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnrbc\" (UniqueName: \"kubernetes.io/projected/8134898c-a265-4fa0-8548-075ea0812b7b-kube-api-access-pnrbc\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.555459 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8e73114-6ccf-40ba-94e8-437e2db303fb-config\") pod \"kube-controller-manager-operator-78b949d7b-z4vt6\" (UID: \"c8e73114-6ccf-40ba-94e8-437e2db303fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.555511 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/efab374b-fec3-4b4e-81f1-002715812a67-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jfkrb\" (UID: \"efab374b-fec3-4b4e-81f1-002715812a67\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.555535 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfmcm\" (UniqueName: \"kubernetes.io/projected/0ec933ee-8c36-49a0-8ba5-c7442f4de367-kube-api-access-dfmcm\") pod \"catalog-operator-68c6474976-zz9rm\" (UID: \"0ec933ee-8c36-49a0-8ba5-c7442f4de367\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.555641 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlq6p\" (UniqueName: \"kubernetes.io/projected/461a0658-ae3b-4972-8122-2719276793b9-kube-api-access-rlq6p\") pod \"dns-default-s4kjv\" (UID: \"461a0658-ae3b-4972-8122-2719276793b9\") " pod="openshift-dns/dns-default-s4kjv" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.556232 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8134898c-a265-4fa0-8548-075ea0812b7b-default-certificate\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.556369 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e-webhook-cert\") pod \"packageserver-d55dfcdfc-g5df6\" (UID: \"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" Feb 18 19:19:42 crc 
kubenswrapper[4942]: I0218 19:19:42.556427 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8e73114-6ccf-40ba-94e8-437e2db303fb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-z4vt6\" (UID: \"c8e73114-6ccf-40ba-94e8-437e2db303fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.556490 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e-apiservice-cert\") pod \"packageserver-d55dfcdfc-g5df6\" (UID: \"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.556506 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b732dca-66e7-48c3-bd7d-5efc1d9662d7-config\") pod \"service-ca-operator-777779d784-v6bqq\" (UID: \"9b732dca-66e7-48c3-bd7d-5efc1d9662d7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.556523 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2407a935-a8b9-4894-baaf-7460fee3d22b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9mc8z\" (UID: \"2407a935-a8b9-4894-baaf-7460fee3d22b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.556566 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/714a349f-4480-4467-9041-7cae31df7686-serving-cert\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.556602 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/461a0658-ae3b-4972-8122-2719276793b9-metrics-tls\") pod \"dns-default-s4kjv\" (UID: \"461a0658-ae3b-4972-8122-2719276793b9\") " pod="openshift-dns/dns-default-s4kjv" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.556658 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.556682 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05aed8e4-390c-4589-8a61-2aab50a1d90f-trusted-ca\") pod \"ingress-operator-5b745b69d9-rw75p\" (UID: \"05aed8e4-390c-4589-8a61-2aab50a1d90f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.556945 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/67ea138c-808d-40ee-9e77-2435676f7fba-cert\") pod \"ingress-canary-s57sd\" (UID: \"67ea138c-808d-40ee-9e77-2435676f7fba\") " pod="openshift-ingress-canary/ingress-canary-s57sd" Feb 18 19:19:42 crc kubenswrapper[4942]: E0218 19:19:42.556970 4942 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:43.056959174 +0000 UTC m=+142.761891839 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.556987 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0ec933ee-8c36-49a0-8ba5-c7442f4de367-srv-cert\") pod \"catalog-operator-68c6474976-zz9rm\" (UID: \"0ec933ee-8c36-49a0-8ba5-c7442f4de367\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.557249 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b732dca-66e7-48c3-bd7d-5efc1d9662d7-serving-cert\") pod \"service-ca-operator-777779d784-v6bqq\" (UID: \"9b732dca-66e7-48c3-bd7d-5efc1d9662d7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.557277 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ae63de17-3438-46ed-94f9-5f51d8a216fd-node-bootstrap-token\") pod \"machine-config-server-xs9jl\" (UID: \"ae63de17-3438-46ed-94f9-5f51d8a216fd\") " 
pod="openshift-machine-config-operator/machine-config-server-xs9jl" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.557337 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/157afc1c-f5df-419b-a760-336d14bbbd6d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vqvpq\" (UID: \"157afc1c-f5df-419b-a760-336d14bbbd6d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.557364 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a873b689-a8f1-4125-b97c-e9d0f6b06397-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lrcbr\" (UID: \"a873b689-a8f1-4125-b97c-e9d0f6b06397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.557426 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a873b689-a8f1-4125-b97c-e9d0f6b06397-srv-cert\") pod \"olm-operator-6b444d44fb-lrcbr\" (UID: \"a873b689-a8f1-4125-b97c-e9d0f6b06397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.557448 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0ec933ee-8c36-49a0-8ba5-c7442f4de367-profile-collector-cert\") pod \"catalog-operator-68c6474976-zz9rm\" (UID: \"0ec933ee-8c36-49a0-8ba5-c7442f4de367\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.557690 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/51ed31a1-9bf0-40ff-8bca-041d691662b4-images\") pod \"machine-config-operator-74547568cd-b488q\" (UID: \"51ed31a1-9bf0-40ff-8bca-041d691662b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.557715 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-socket-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.557750 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-244x4\" (UniqueName: \"kubernetes.io/projected/bb96ca2b-27a4-42e3-af7f-3514321500a3-kube-api-access-244x4\") pod \"control-plane-machine-set-operator-78cbb6b69f-rd6k5\" (UID: \"bb96ca2b-27a4-42e3-af7f-3514321500a3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.560032 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/83c8ec50-d07e-4c96-80b8-22cf232b015c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8nxhq\" (UID: \"83c8ec50-d07e-4c96-80b8-22cf232b015c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.560090 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/087f0c6b-3e9f-4db4-bbcb-a8075e218219-registry-certificates\") pod 
\"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.560114 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/714a349f-4480-4467-9041-7cae31df7686-etcd-client\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.560138 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8134898c-a265-4fa0-8548-075ea0812b7b-stats-auth\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.560158 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2407a935-a8b9-4894-baaf-7460fee3d22b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9mc8z\" (UID: \"2407a935-a8b9-4894-baaf-7460fee3d22b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.560194 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/05aed8e4-390c-4589-8a61-2aab50a1d90f-metrics-tls\") pod \"ingress-operator-5b745b69d9-rw75p\" (UID: \"05aed8e4-390c-4589-8a61-2aab50a1d90f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.560226 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vtlv7\" (UniqueName: \"kubernetes.io/projected/714a349f-4480-4467-9041-7cae31df7686-kube-api-access-vtlv7\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.560248 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51ed31a1-9bf0-40ff-8bca-041d691662b4-proxy-tls\") pod \"machine-config-operator-74547568cd-b488q\" (UID: \"51ed31a1-9bf0-40ff-8bca-041d691662b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.560272 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtdlh\" (UniqueName: \"kubernetes.io/projected/67ea138c-808d-40ee-9e77-2435676f7fba-kube-api-access-mtdlh\") pod \"ingress-canary-s57sd\" (UID: \"67ea138c-808d-40ee-9e77-2435676f7fba\") " pod="openshift-ingress-canary/ingress-canary-s57sd"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.560294 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6lxh\" (UniqueName: \"kubernetes.io/projected/83c8ec50-d07e-4c96-80b8-22cf232b015c-kube-api-access-g6lxh\") pod \"package-server-manager-789f6589d5-8nxhq\" (UID: \"83c8ec50-d07e-4c96-80b8-22cf232b015c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.560321 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bpp7\" (UniqueName: \"kubernetes.io/projected/5d6ad520-b407-4b86-867b-9e9658bfa536-kube-api-access-2bpp7\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.559441 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/714a349f-4480-4467-9041-7cae31df7686-serving-cert\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.560373 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25z4w\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-kube-api-access-25z4w\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.559598 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a79c946-4621-4b6d-af59-6b919d125502-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jqs9l\" (UID: \"9a79c946-4621-4b6d-af59-6b919d125502\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.560412 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-client-ca\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.562599 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05aed8e4-390c-4589-8a61-2aab50a1d90f-trusted-ca\") pod \"ingress-operator-5b745b69d9-rw75p\" (UID: \"05aed8e4-390c-4589-8a61-2aab50a1d90f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.562639 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d6ad520-b407-4b86-867b-9e9658bfa536-serving-cert\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.564946 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-5l26l"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.565094 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/461a0658-ae3b-4972-8122-2719276793b9-config-volume\") pod \"dns-default-s4kjv\" (UID: \"461a0658-ae3b-4972-8122-2719276793b9\") " pod="openshift-dns/dns-default-s4kjv"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.565120 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-mountpoint-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.565148 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e-tmpfs\") pod \"packageserver-d55dfcdfc-g5df6\" (UID: \"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.565167 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv6cc\" (UniqueName: \"kubernetes.io/projected/a873b689-a8f1-4125-b97c-e9d0f6b06397-kube-api-access-rv6cc\") pod \"olm-operator-6b444d44fb-lrcbr\" (UID: \"a873b689-a8f1-4125-b97c-e9d0f6b06397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.565217 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-registry-tls\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.565238 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/696bcbdd-c9ca-45cd-ae12-e733919e2832-signing-cabundle\") pod \"service-ca-9c57cc56f-9wcp7\" (UID: \"696bcbdd-c9ca-45cd-ae12-e733919e2832\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.565817 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb96ca2b-27a4-42e3-af7f-3514321500a3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rd6k5\" (UID: \"bb96ca2b-27a4-42e3-af7f-3514321500a3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.565985 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phjwr\" (UniqueName: \"kubernetes.io/projected/efab374b-fec3-4b4e-81f1-002715812a67-kube-api-access-phjwr\") pod \"marketplace-operator-79b997595-jfkrb\" (UID: \"efab374b-fec3-4b4e-81f1-002715812a67\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.566021 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/157afc1c-f5df-419b-a760-336d14bbbd6d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vqvpq\" (UID: \"157afc1c-f5df-419b-a760-336d14bbbd6d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.566221 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae-proxy-tls\") pod \"machine-config-controller-84d6567774-zpnzn\" (UID: \"8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.566254 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-registration-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.566294 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/087f0c6b-3e9f-4db4-bbcb-a8075e218219-trusted-ca\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.566748 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-config\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.566799 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/714a349f-4480-4467-9041-7cae31df7686-etcd-ca\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.566865 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh7v2\" (UniqueName: \"kubernetes.io/projected/8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae-kube-api-access-wh7v2\") pod \"machine-config-controller-84d6567774-zpnzn\" (UID: \"8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.567407 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/714a349f-4480-4467-9041-7cae31df7686-etcd-ca\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.569021 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/087f0c6b-3e9f-4db4-bbcb-a8075e218219-trusted-ca\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.569294 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zpnzn\" (UID: \"8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.569510 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/087f0c6b-3e9f-4db4-bbcb-a8075e218219-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.569701 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05aed8e4-390c-4589-8a61-2aab50a1d90f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-rw75p\" (UID: \"05aed8e4-390c-4589-8a61-2aab50a1d90f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.569815 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/714a349f-4480-4467-9041-7cae31df7686-config\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.570421 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/714a349f-4480-4467-9041-7cae31df7686-config\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.571063 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/05aed8e4-390c-4589-8a61-2aab50a1d90f-metrics-tls\") pod \"ingress-operator-5b745b69d9-rw75p\" (UID: \"05aed8e4-390c-4589-8a61-2aab50a1d90f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.571452 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/087f0c6b-3e9f-4db4-bbcb-a8075e218219-registry-certificates\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.572912 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-registry-tls\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.573372 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-config\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.576889 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/714a349f-4480-4467-9041-7cae31df7686-etcd-client\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.585740 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.588992 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/087f0c6b-3e9f-4db4-bbcb-a8075e218219-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.601721 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7smb\" (UniqueName: \"kubernetes.io/projected/05aed8e4-390c-4589-8a61-2aab50a1d90f-kube-api-access-v7smb\") pod \"ingress-operator-5b745b69d9-rw75p\" (UID: \"05aed8e4-390c-4589-8a61-2aab50a1d90f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.626446 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-bound-sa-token\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.631400 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmwzh\" (UniqueName: \"kubernetes.io/projected/9a79c946-4621-4b6d-af59-6b919d125502-kube-api-access-tmwzh\") pod \"kube-storage-version-migrator-operator-b67b599dd-jqs9l\" (UID: \"9a79c946-4621-4b6d-af59-6b919d125502\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.671171 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:19:42 crc kubenswrapper[4942]: E0218 19:19:42.671371 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:43.171318967 +0000 UTC m=+142.876251632 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672245 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8134898c-a265-4fa0-8548-075ea0812b7b-stats-auth\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672279 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2407a935-a8b9-4894-baaf-7460fee3d22b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9mc8z\" (UID: \"2407a935-a8b9-4894-baaf-7460fee3d22b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672310 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51ed31a1-9bf0-40ff-8bca-041d691662b4-proxy-tls\") pod \"machine-config-operator-74547568cd-b488q\" (UID: \"51ed31a1-9bf0-40ff-8bca-041d691662b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672332 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtdlh\" (UniqueName: \"kubernetes.io/projected/67ea138c-808d-40ee-9e77-2435676f7fba-kube-api-access-mtdlh\") pod \"ingress-canary-s57sd\" (UID: \"67ea138c-808d-40ee-9e77-2435676f7fba\") " pod="openshift-ingress-canary/ingress-canary-s57sd"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672353 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6lxh\" (UniqueName: \"kubernetes.io/projected/83c8ec50-d07e-4c96-80b8-22cf232b015c-kube-api-access-g6lxh\") pod \"package-server-manager-789f6589d5-8nxhq\" (UID: \"83c8ec50-d07e-4c96-80b8-22cf232b015c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672393 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/461a0658-ae3b-4972-8122-2719276793b9-config-volume\") pod \"dns-default-s4kjv\" (UID: \"461a0658-ae3b-4972-8122-2719276793b9\") " pod="openshift-dns/dns-default-s4kjv"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672417 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e-tmpfs\") pod \"packageserver-d55dfcdfc-g5df6\" (UID: \"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672437 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-mountpoint-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672457 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv6cc\" (UniqueName: \"kubernetes.io/projected/a873b689-a8f1-4125-b97c-e9d0f6b06397-kube-api-access-rv6cc\") pod \"olm-operator-6b444d44fb-lrcbr\" (UID: \"a873b689-a8f1-4125-b97c-e9d0f6b06397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672480 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/696bcbdd-c9ca-45cd-ae12-e733919e2832-signing-cabundle\") pod \"service-ca-9c57cc56f-9wcp7\" (UID: \"696bcbdd-c9ca-45cd-ae12-e733919e2832\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672502 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb96ca2b-27a4-42e3-af7f-3514321500a3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rd6k5\" (UID: \"bb96ca2b-27a4-42e3-af7f-3514321500a3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672533 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phjwr\" (UniqueName: \"kubernetes.io/projected/efab374b-fec3-4b4e-81f1-002715812a67-kube-api-access-phjwr\") pod \"marketplace-operator-79b997595-jfkrb\" (UID: \"efab374b-fec3-4b4e-81f1-002715812a67\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672553 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-registration-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672574 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/157afc1c-f5df-419b-a760-336d14bbbd6d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vqvpq\" (UID: \"157afc1c-f5df-419b-a760-336d14bbbd6d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672595 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae-proxy-tls\") pod \"machine-config-controller-84d6567774-zpnzn\" (UID: \"8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672621 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh7v2\" (UniqueName: \"kubernetes.io/projected/8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae-kube-api-access-wh7v2\") pod \"machine-config-controller-84d6567774-zpnzn\" (UID: \"8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672664 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zpnzn\" (UID: \"8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672690 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wmrg\" (UniqueName: \"kubernetes.io/projected/9b732dca-66e7-48c3-bd7d-5efc1d9662d7-kube-api-access-8wmrg\") pod \"service-ca-operator-777779d784-v6bqq\" (UID: \"9b732dca-66e7-48c3-bd7d-5efc1d9662d7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672710 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-plugins-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.672732 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j82p\" (UniqueName: \"kubernetes.io/projected/2407a935-a8b9-4894-baaf-7460fee3d22b-kube-api-access-8j82p\") pod \"openshift-controller-manager-operator-756b6f6bc6-9mc8z\" (UID: \"2407a935-a8b9-4894-baaf-7460fee3d22b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.673512 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-registration-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674008 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8e73114-6ccf-40ba-94e8-437e2db303fb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-z4vt6\" (UID: \"c8e73114-6ccf-40ba-94e8-437e2db303fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674048 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/696bcbdd-c9ca-45cd-ae12-e733919e2832-signing-key\") pod \"service-ca-9c57cc56f-9wcp7\" (UID: \"696bcbdd-c9ca-45cd-ae12-e733919e2832\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674071 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8134898c-a265-4fa0-8548-075ea0812b7b-service-ca-bundle\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674103 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz8wn\" (UniqueName: \"kubernetes.io/projected/af99a6af-5df3-4b87-8f14-a564c5d86164-kube-api-access-cz8wn\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674128 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgftz\" (UniqueName: \"kubernetes.io/projected/0e51d4dc-e813-4166-bb6a-45d083a09d2a-kube-api-access-wgftz\") pod \"migrator-59844c95c7-9grql\" (UID: \"0e51d4dc-e813-4166-bb6a-45d083a09d2a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9grql"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674151 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88vg5\" (UniqueName: \"kubernetes.io/projected/5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e-kube-api-access-88vg5\") pod \"packageserver-d55dfcdfc-g5df6\" (UID: \"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674182 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ae63de17-3438-46ed-94f9-5f51d8a216fd-certs\") pod \"machine-config-server-xs9jl\" (UID: \"ae63de17-3438-46ed-94f9-5f51d8a216fd\") " pod="openshift-machine-config-operator/machine-config-server-xs9jl"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674207 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/efab374b-fec3-4b4e-81f1-002715812a67-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jfkrb\" (UID: \"efab374b-fec3-4b4e-81f1-002715812a67\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674251 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d48r9\" (UniqueName: \"kubernetes.io/projected/ae63de17-3438-46ed-94f9-5f51d8a216fd-kube-api-access-d48r9\") pod \"machine-config-server-xs9jl\" (UID: \"ae63de17-3438-46ed-94f9-5f51d8a216fd\") " pod="openshift-machine-config-operator/machine-config-server-xs9jl"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674684 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zj44h\" (UID: \"b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zj44h"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674727 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51ed31a1-9bf0-40ff-8bca-041d691662b4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-b488q\" (UID: \"51ed31a1-9bf0-40ff-8bca-041d691662b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674752 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrch7\" (UniqueName: \"kubernetes.io/projected/696bcbdd-c9ca-45cd-ae12-e733919e2832-kube-api-access-qrch7\") pod \"service-ca-9c57cc56f-9wcp7\" (UID: \"696bcbdd-c9ca-45cd-ae12-e733919e2832\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674788 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01ba4570-01bb-4964-8c1d-791c25d72a1a-config-volume\") pod \"collect-profiles-29524035-tk5g4\" (UID: \"01ba4570-01bb-4964-8c1d-791c25d72a1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674815 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl889\" (UniqueName: \"kubernetes.io/projected/b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7-kube-api-access-jl889\") pod \"multus-admission-controller-857f4d67dd-zj44h\" (UID: \"b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zj44h"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674862 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-csi-data-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674889 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8134898c-a265-4fa0-8548-075ea0812b7b-metrics-certs\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674941 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h4qr\" (UniqueName: \"kubernetes.io/projected/51ed31a1-9bf0-40ff-8bca-041d691662b4-kube-api-access-4h4qr\") pod \"machine-config-operator-74547568cd-b488q\" (UID: \"51ed31a1-9bf0-40ff-8bca-041d691662b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674971 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/157afc1c-f5df-419b-a760-336d14bbbd6d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vqvpq\" (UID: \"157afc1c-f5df-419b-a760-336d14bbbd6d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674996 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01ba4570-01bb-4964-8c1d-791c25d72a1a-secret-volume\") pod \"collect-profiles-29524035-tk5g4\" (UID: \"01ba4570-01bb-4964-8c1d-791c25d72a1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675019 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g7fx\" (UniqueName: \"kubernetes.io/projected/01ba4570-01bb-4964-8c1d-791c25d72a1a-kube-api-access-7g7fx\") pod \"collect-profiles-29524035-tk5g4\" (UID: \"01ba4570-01bb-4964-8c1d-791c25d72a1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675044 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnrbc\" (UniqueName: \"kubernetes.io/projected/8134898c-a265-4fa0-8548-075ea0812b7b-kube-api-access-pnrbc\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675069 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8e73114-6ccf-40ba-94e8-437e2db303fb-config\") pod \"kube-controller-manager-operator-78b949d7b-z4vt6\" (UID: \"c8e73114-6ccf-40ba-94e8-437e2db303fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675091 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfmcm\" (UniqueName: \"kubernetes.io/projected/0ec933ee-8c36-49a0-8ba5-c7442f4de367-kube-api-access-dfmcm\") pod \"catalog-operator-68c6474976-zz9rm\" (UID: \"0ec933ee-8c36-49a0-8ba5-c7442f4de367\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675113 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/efab374b-fec3-4b4e-81f1-002715812a67-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jfkrb\" (UID: \"efab374b-fec3-4b4e-81f1-002715812a67\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675136 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlq6p\" (UniqueName: \"kubernetes.io/projected/461a0658-ae3b-4972-8122-2719276793b9-kube-api-access-rlq6p\") pod \"dns-default-s4kjv\" (UID: \"461a0658-ae3b-4972-8122-2719276793b9\") " pod="openshift-dns/dns-default-s4kjv"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675168 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8134898c-a265-4fa0-8548-075ea0812b7b-default-certificate\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675188 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e-webhook-cert\") pod \"packageserver-d55dfcdfc-g5df6\" (UID: \"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675310 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8e73114-6ccf-40ba-94e8-437e2db303fb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-z4vt6\" (UID: \"c8e73114-6ccf-40ba-94e8-437e2db303fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675340 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b732dca-66e7-48c3-bd7d-5efc1d9662d7-config\") pod \"service-ca-operator-777779d784-v6bqq\" (UID: \"9b732dca-66e7-48c3-bd7d-5efc1d9662d7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq"
Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675376 4942 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2407a935-a8b9-4894-baaf-7460fee3d22b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9mc8z\" (UID: \"2407a935-a8b9-4894-baaf-7460fee3d22b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675403 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e-apiservice-cert\") pod \"packageserver-d55dfcdfc-g5df6\" (UID: \"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675425 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/461a0658-ae3b-4972-8122-2719276793b9-metrics-tls\") pod \"dns-default-s4kjv\" (UID: \"461a0658-ae3b-4972-8122-2719276793b9\") " pod="openshift-dns/dns-default-s4kjv" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675457 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675480 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/67ea138c-808d-40ee-9e77-2435676f7fba-cert\") pod \"ingress-canary-s57sd\" (UID: \"67ea138c-808d-40ee-9e77-2435676f7fba\") " pod="openshift-ingress-canary/ingress-canary-s57sd" Feb 18 19:19:42 crc 
kubenswrapper[4942]: I0218 19:19:42.675577 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/461a0658-ae3b-4972-8122-2719276793b9-config-volume\") pod \"dns-default-s4kjv\" (UID: \"461a0658-ae3b-4972-8122-2719276793b9\") " pod="openshift-dns/dns-default-s4kjv" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.676028 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25z4w\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-kube-api-access-25z4w\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.676411 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e-tmpfs\") pod \"packageserver-d55dfcdfc-g5df6\" (UID: \"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.674969 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-plugins-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.677218 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8134898c-a265-4fa0-8548-075ea0812b7b-service-ca-bundle\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.678181 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01ba4570-01bb-4964-8c1d-791c25d72a1a-config-volume\") pod \"collect-profiles-29524035-tk5g4\" (UID: \"01ba4570-01bb-4964-8c1d-791c25d72a1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.678354 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-csi-data-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.678881 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb96ca2b-27a4-42e3-af7f-3514321500a3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rd6k5\" (UID: \"bb96ca2b-27a4-42e3-af7f-3514321500a3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.679200 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2407a935-a8b9-4894-baaf-7460fee3d22b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9mc8z\" (UID: \"2407a935-a8b9-4894-baaf-7460fee3d22b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.679171 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zpnzn\" (UID: 
\"8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.679465 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.679549 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/696bcbdd-c9ca-45cd-ae12-e733919e2832-signing-cabundle\") pod \"service-ca-9c57cc56f-9wcp7\" (UID: \"696bcbdd-c9ca-45cd-ae12-e733919e2832\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.679587 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae-proxy-tls\") pod \"machine-config-controller-84d6567774-zpnzn\" (UID: \"8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.675501 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0ec933ee-8c36-49a0-8ba5-c7442f4de367-srv-cert\") pod \"catalog-operator-68c6474976-zz9rm\" (UID: \"0ec933ee-8c36-49a0-8ba5-c7442f4de367\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.680290 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b732dca-66e7-48c3-bd7d-5efc1d9662d7-serving-cert\") pod \"service-ca-operator-777779d784-v6bqq\" (UID: \"9b732dca-66e7-48c3-bd7d-5efc1d9662d7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq" Feb 18 19:19:42 crc 
kubenswrapper[4942]: I0218 19:19:42.680343 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ae63de17-3438-46ed-94f9-5f51d8a216fd-node-bootstrap-token\") pod \"machine-config-server-xs9jl\" (UID: \"ae63de17-3438-46ed-94f9-5f51d8a216fd\") " pod="openshift-machine-config-operator/machine-config-server-xs9jl" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.680501 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/157afc1c-f5df-419b-a760-336d14bbbd6d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vqvpq\" (UID: \"157afc1c-f5df-419b-a760-336d14bbbd6d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.681188 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a873b689-a8f1-4125-b97c-e9d0f6b06397-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lrcbr\" (UID: \"a873b689-a8f1-4125-b97c-e9d0f6b06397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.681236 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a873b689-a8f1-4125-b97c-e9d0f6b06397-srv-cert\") pod \"olm-operator-6b444d44fb-lrcbr\" (UID: \"a873b689-a8f1-4125-b97c-e9d0f6b06397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.681262 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0ec933ee-8c36-49a0-8ba5-c7442f4de367-profile-collector-cert\") pod \"catalog-operator-68c6474976-zz9rm\" 
(UID: \"0ec933ee-8c36-49a0-8ba5-c7442f4de367\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.681296 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/51ed31a1-9bf0-40ff-8bca-041d691662b4-images\") pod \"machine-config-operator-74547568cd-b488q\" (UID: \"51ed31a1-9bf0-40ff-8bca-041d691662b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.681318 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-socket-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.681345 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-244x4\" (UniqueName: \"kubernetes.io/projected/bb96ca2b-27a4-42e3-af7f-3514321500a3-kube-api-access-244x4\") pod \"control-plane-machine-set-operator-78cbb6b69f-rd6k5\" (UID: \"bb96ca2b-27a4-42e3-af7f-3514321500a3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.681370 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/83c8ec50-d07e-4c96-80b8-22cf232b015c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8nxhq\" (UID: \"83c8ec50-d07e-4c96-80b8-22cf232b015c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.681523 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8e73114-6ccf-40ba-94e8-437e2db303fb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-z4vt6\" (UID: \"c8e73114-6ccf-40ba-94e8-437e2db303fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.682034 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8134898c-a265-4fa0-8548-075ea0812b7b-stats-auth\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.682870 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/696bcbdd-c9ca-45cd-ae12-e733919e2832-signing-key\") pod \"service-ca-9c57cc56f-9wcp7\" (UID: \"696bcbdd-c9ca-45cd-ae12-e733919e2832\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.683003 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ae63de17-3438-46ed-94f9-5f51d8a216fd-certs\") pod \"machine-config-server-xs9jl\" (UID: \"ae63de17-3438-46ed-94f9-5f51d8a216fd\") " pod="openshift-machine-config-operator/machine-config-server-xs9jl" Feb 18 19:19:42 crc kubenswrapper[4942]: E0218 19:19:42.684143 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:43.184126532 +0000 UTC m=+142.889059277 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.684386 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8134898c-a265-4fa0-8548-075ea0812b7b-metrics-certs\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.684549 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b732dca-66e7-48c3-bd7d-5efc1d9662d7-config\") pod \"service-ca-operator-777779d784-v6bqq\" (UID: \"9b732dca-66e7-48c3-bd7d-5efc1d9662d7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.684902 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/efab374b-fec3-4b4e-81f1-002715812a67-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jfkrb\" (UID: \"efab374b-fec3-4b4e-81f1-002715812a67\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.686197 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/51ed31a1-9bf0-40ff-8bca-041d691662b4-proxy-tls\") pod \"machine-config-operator-74547568cd-b488q\" (UID: 
\"51ed31a1-9bf0-40ff-8bca-041d691662b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.687646 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51ed31a1-9bf0-40ff-8bca-041d691662b4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-b488q\" (UID: \"51ed31a1-9bf0-40ff-8bca-041d691662b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.690188 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b732dca-66e7-48c3-bd7d-5efc1d9662d7-serving-cert\") pod \"service-ca-operator-777779d784-v6bqq\" (UID: \"9b732dca-66e7-48c3-bd7d-5efc1d9662d7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.690310 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/67ea138c-808d-40ee-9e77-2435676f7fba-cert\") pod \"ingress-canary-s57sd\" (UID: \"67ea138c-808d-40ee-9e77-2435676f7fba\") " pod="openshift-ingress-canary/ingress-canary-s57sd" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.690483 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8134898c-a265-4fa0-8548-075ea0812b7b-default-certificate\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.691295 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2407a935-a8b9-4894-baaf-7460fee3d22b-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-9mc8z\" (UID: \"2407a935-a8b9-4894-baaf-7460fee3d22b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.692170 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0ec933ee-8c36-49a0-8ba5-c7442f4de367-srv-cert\") pod \"catalog-operator-68c6474976-zz9rm\" (UID: \"0ec933ee-8c36-49a0-8ba5-c7442f4de367\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.692238 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ae63de17-3438-46ed-94f9-5f51d8a216fd-node-bootstrap-token\") pod \"machine-config-server-xs9jl\" (UID: \"ae63de17-3438-46ed-94f9-5f51d8a216fd\") " pod="openshift-machine-config-operator/machine-config-server-xs9jl" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.693399 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/83c8ec50-d07e-4c96-80b8-22cf232b015c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8nxhq\" (UID: \"83c8ec50-d07e-4c96-80b8-22cf232b015c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.693971 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a873b689-a8f1-4125-b97c-e9d0f6b06397-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lrcbr\" (UID: \"a873b689-a8f1-4125-b97c-e9d0f6b06397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.694934 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtlv7\" (UniqueName: \"kubernetes.io/projected/714a349f-4480-4467-9041-7cae31df7686-kube-api-access-vtlv7\") pod \"etcd-operator-b45778765-x5rln\" (UID: \"714a349f-4480-4467-9041-7cae31df7686\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.697435 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-zj44h\" (UID: \"b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zj44h" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.697707 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-socket-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.699033 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/461a0658-ae3b-4972-8122-2719276793b9-metrics-tls\") pod \"dns-default-s4kjv\" (UID: \"461a0658-ae3b-4972-8122-2719276793b9\") " pod="openshift-dns/dns-default-s4kjv" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.699850 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/51ed31a1-9bf0-40ff-8bca-041d691662b4-images\") pod \"machine-config-operator-74547568cd-b488q\" (UID: \"51ed31a1-9bf0-40ff-8bca-041d691662b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.700120 4942 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8e73114-6ccf-40ba-94e8-437e2db303fb-config\") pod \"kube-controller-manager-operator-78b949d7b-z4vt6\" (UID: \"c8e73114-6ccf-40ba-94e8-437e2db303fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.700880 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e-apiservice-cert\") pod \"packageserver-d55dfcdfc-g5df6\" (UID: \"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.704996 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01ba4570-01bb-4964-8c1d-791c25d72a1a-secret-volume\") pod \"collect-profiles-29524035-tk5g4\" (UID: \"01ba4570-01bb-4964-8c1d-791c25d72a1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.706505 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/157afc1c-f5df-419b-a760-336d14bbbd6d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vqvpq\" (UID: \"157afc1c-f5df-419b-a760-336d14bbbd6d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.706750 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/af99a6af-5df3-4b87-8f14-a564c5d86164-mountpoint-dir\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" Feb 18 19:19:42 crc 
kubenswrapper[4942]: I0218 19:19:42.708641 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a873b689-a8f1-4125-b97c-e9d0f6b06397-srv-cert\") pod \"olm-operator-6b444d44fb-lrcbr\" (UID: \"a873b689-a8f1-4125-b97c-e9d0f6b06397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.709267 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0ec933ee-8c36-49a0-8ba5-c7442f4de367-profile-collector-cert\") pod \"catalog-operator-68c6474976-zz9rm\" (UID: \"0ec933ee-8c36-49a0-8ba5-c7442f4de367\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.709379 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/efab374b-fec3-4b4e-81f1-002715812a67-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jfkrb\" (UID: \"efab374b-fec3-4b4e-81f1-002715812a67\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.710046 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e-webhook-cert\") pod \"packageserver-d55dfcdfc-g5df6\" (UID: \"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.710157 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/157afc1c-f5df-419b-a760-336d14bbbd6d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vqvpq\" (UID: \"157afc1c-f5df-419b-a760-336d14bbbd6d\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.729278 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.736721 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bpp7\" (UniqueName: \"kubernetes.io/projected/5d6ad520-b407-4b86-867b-9e9658bfa536-kube-api-access-2bpp7\") pod \"controller-manager-879f6c89f-z4t28\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.748041 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.754174 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4pmfw"] Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.755080 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05aed8e4-390c-4589-8a61-2aab50a1d90f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-rw75p\" (UID: \"05aed8e4-390c-4589-8a61-2aab50a1d90f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.774535 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bd7zz"] Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.780695 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh7v2\" (UniqueName: 
\"kubernetes.io/projected/8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae-kube-api-access-wh7v2\") pod \"machine-config-controller-84d6567774-zpnzn\" (UID: \"8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.786152 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:42 crc kubenswrapper[4942]: E0218 19:19:42.787142 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:43.287117859 +0000 UTC m=+142.992050524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.794841 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phjwr\" (UniqueName: \"kubernetes.io/projected/efab374b-fec3-4b4e-81f1-002715812a67-kube-api-access-phjwr\") pod \"marketplace-operator-79b997595-jfkrb\" (UID: \"efab374b-fec3-4b4e-81f1-002715812a67\") " pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.817528 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/157afc1c-f5df-419b-a760-336d14bbbd6d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-vqvpq\" (UID: \"157afc1c-f5df-419b-a760-336d14bbbd6d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.833573 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtdlh\" (UniqueName: \"kubernetes.io/projected/67ea138c-808d-40ee-9e77-2435676f7fba-kube-api-access-mtdlh\") pod \"ingress-canary-s57sd\" (UID: \"67ea138c-808d-40ee-9e77-2435676f7fba\") " pod="openshift-ingress-canary/ingress-canary-s57sd" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.837834 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kpfjc"] Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.840615 4942 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.846496 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tndhs"] Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.854541 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv6cc\" (UniqueName: \"kubernetes.io/projected/a873b689-a8f1-4125-b97c-e9d0f6b06397-kube-api-access-rv6cc\") pod \"olm-operator-6b444d44fb-lrcbr\" (UID: \"a873b689-a8f1-4125-b97c-e9d0f6b06397\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.886636 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j82p\" (UniqueName: \"kubernetes.io/projected/2407a935-a8b9-4894-baaf-7460fee3d22b-kube-api-access-8j82p\") pod \"openshift-controller-manager-operator-756b6f6bc6-9mc8z\" (UID: \"2407a935-a8b9-4894-baaf-7460fee3d22b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.890559 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:42 crc kubenswrapper[4942]: E0218 19:19:42.890897 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 19:19:43.390883997 +0000 UTC m=+143.095816652 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.916935 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bgd6x"] Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.926694 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.928959 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl889\" (UniqueName: \"kubernetes.io/projected/b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7-kube-api-access-jl889\") pod \"multus-admission-controller-857f4d67dd-zj44h\" (UID: \"b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-zj44h" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.932264 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrch7\" (UniqueName: \"kubernetes.io/projected/696bcbdd-c9ca-45cd-ae12-e733919e2832-kube-api-access-qrch7\") pod \"service-ca-9c57cc56f-9wcp7\" (UID: \"696bcbdd-c9ca-45cd-ae12-e733919e2832\") " pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.936038 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" 
event={"ID":"4afc5765-32dc-4b49-b1a3-9141c2c96087","Type":"ContainerStarted","Data":"0bd508beedecb19783ef3701b1d63010adfb4748bf36f05ad2a53d349bbaec15"} Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.936371 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-s57sd" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.937419 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wmrg\" (UniqueName: \"kubernetes.io/projected/9b732dca-66e7-48c3-bd7d-5efc1d9662d7-kube-api-access-8wmrg\") pod \"service-ca-operator-777779d784-v6bqq\" (UID: \"9b732dca-66e7-48c3-bd7d-5efc1d9662d7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.940073 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4pmfw" event={"ID":"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29","Type":"ContainerStarted","Data":"be0e2983f257f89983a034d639679ba41911d38165ae858e7dff29e94b7347e8"} Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.940105 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4pmfw" event={"ID":"5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29","Type":"ContainerStarted","Data":"7752848ea47de612256d0efd19571c026f62196ff350d11480f295cbf89a9d21"} Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.942585 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-4pmfw" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.947483 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" event={"ID":"709f9378-2d1c-4158-9521-e6000e06eb5e","Type":"ContainerStarted","Data":"70fafaa1b92bcbda4edb91c0b2cb05438ffafad32fa2fec58890b4c7a238677b"} Feb 18 
19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.947572 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" event={"ID":"709f9378-2d1c-4158-9521-e6000e06eb5e","Type":"ContainerStarted","Data":"ce1c08e5daa508491ee69a9555f342277221c9ed36bf4ab05789d1efc230b58e"} Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.951340 4942 patch_prober.go:28] interesting pod/console-operator-58897d9998-4pmfw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.951411 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4pmfw" podUID="5bd5f22b-1c00-4281-9d3a-6ed77a4d0d29" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.951922 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd" event={"ID":"0d941adf-0c5e-46d6-9a7c-a7677468f322","Type":"ContainerStarted","Data":"094fb475778879f3d2db25d5800580f383e0308a54494a858cf3ebc47d46f656"} Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.951968 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd" event={"ID":"0d941adf-0c5e-46d6-9a7c-a7677468f322","Type":"ContainerStarted","Data":"da475d0721a7d193057b00168df055f153446074dc5684b508d817f1c1d3fe48"} Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.953090 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tndhs" 
event={"ID":"cb8403e3-f9b3-4ddf-8688-1a025a2b9291","Type":"ContainerStarted","Data":"e1995a8aaae8e1fb6ac760957cee590a7081bcdd015d1c6948be8dd9b3e47eeb"} Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.955719 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" event={"ID":"42dda107-038c-42c1-8182-52bee75caea9","Type":"ContainerStarted","Data":"549a45966f3465b915ee762043425f7fc34d780e5d763266b632f538fe2cd88e"} Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.958205 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h4qr\" (UniqueName: \"kubernetes.io/projected/51ed31a1-9bf0-40ff-8bca-041d691662b4-kube-api-access-4h4qr\") pod \"machine-config-operator-74547568cd-b488q\" (UID: \"51ed31a1-9bf0-40ff-8bca-041d691662b4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.965267 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" Feb 18 19:19:42 crc kubenswrapper[4942]: W0218 19:19:42.972163 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae259edb_f577_48b8_b236_91656ac269d2.slice/crio-e5df29d48d25aa3ea435cdc1318c970ef57f723e811a569ec499280ebf2c8923 WatchSource:0}: Error finding container e5df29d48d25aa3ea435cdc1318c970ef57f723e811a569ec499280ebf2c8923: Status 404 returned error can't find the container with id e5df29d48d25aa3ea435cdc1318c970ef57f723e811a569ec499280ebf2c8923 Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.974791 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6lxh\" (UniqueName: \"kubernetes.io/projected/83c8ec50-d07e-4c96-80b8-22cf232b015c-kube-api-access-g6lxh\") pod \"package-server-manager-789f6589d5-8nxhq\" (UID: \"83c8ec50-d07e-4c96-80b8-22cf232b015c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq" Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.992522 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:42 crc kubenswrapper[4942]: E0218 19:19:42.992976 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:43.49295654 +0000 UTC m=+143.197889205 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:42 crc kubenswrapper[4942]: I0218 19:19:42.995294 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlq6p\" (UniqueName: \"kubernetes.io/projected/461a0658-ae3b-4972-8122-2719276793b9-kube-api-access-rlq6p\") pod \"dns-default-s4kjv\" (UID: \"461a0658-ae3b-4972-8122-2719276793b9\") " pod="openshift-dns/dns-default-s4kjv" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.002188 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.014244 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.021593 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.022161 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8e73114-6ccf-40ba-94e8-437e2db303fb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-z4vt6\" (UID: \"c8e73114-6ccf-40ba-94e8-437e2db303fb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.040213 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.042187 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.060512 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.068138 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-244x4\" (UniqueName: \"kubernetes.io/projected/bb96ca2b-27a4-42e3-af7f-3514321500a3-kube-api-access-244x4\") pod \"control-plane-machine-set-operator-78cbb6b69f-rd6k5\" (UID: \"bb96ca2b-27a4-42e3-af7f-3514321500a3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.073786 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfmcm\" (UniqueName: \"kubernetes.io/projected/0ec933ee-8c36-49a0-8ba5-c7442f4de367-kube-api-access-dfmcm\") pod \"catalog-operator-68c6474976-zz9rm\" (UID: \"0ec933ee-8c36-49a0-8ba5-c7442f4de367\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.080702 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.090089 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.095568 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.097128 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgftz\" (UniqueName: \"kubernetes.io/projected/0e51d4dc-e813-4166-bb6a-45d083a09d2a-kube-api-access-wgftz\") pod \"migrator-59844c95c7-9grql\" (UID: \"0e51d4dc-e813-4166-bb6a-45d083a09d2a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9grql" Feb 18 19:19:43 crc kubenswrapper[4942]: E0218 19:19:43.097136 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:43.597120138 +0000 UTC m=+143.302052923 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.100330 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88vg5\" (UniqueName: \"kubernetes.io/projected/5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e-kube-api-access-88vg5\") pod \"packageserver-d55dfcdfc-g5df6\" (UID: \"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.103409 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" Feb 18 19:19:43 crc kubenswrapper[4942]: W0218 19:19:43.103914 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbe755cf_b7a2_4557_9368_5d71df455408.slice/crio-2d9ae5080d8a0911d435f2cee044bba81aa7a5e14f68397f7975e962531d6193 WatchSource:0}: Error finding container 2d9ae5080d8a0911d435f2cee044bba81aa7a5e14f68397f7975e962531d6193: Status 404 returned error can't find the container with id 2d9ae5080d8a0911d435f2cee044bba81aa7a5e14f68397f7975e962531d6193 Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.114488 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d48r9\" (UniqueName: \"kubernetes.io/projected/ae63de17-3438-46ed-94f9-5f51d8a216fd-kube-api-access-d48r9\") pod \"machine-config-server-xs9jl\" (UID: \"ae63de17-3438-46ed-94f9-5f51d8a216fd\") " 
pod="openshift-machine-config-operator/machine-config-server-xs9jl" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.118915 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.118972 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-x5rln"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.118984 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.123887 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.134702 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g7fx\" (UniqueName: \"kubernetes.io/projected/01ba4570-01bb-4964-8c1d-791c25d72a1a-kube-api-access-7g7fx\") pod \"collect-profiles-29524035-tk5g4\" (UID: \"01ba4570-01bb-4964-8c1d-791c25d72a1a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.153360 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.163600 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz8wn\" (UniqueName: \"kubernetes.io/projected/af99a6af-5df3-4b87-8f14-a564c5d86164-kube-api-access-cz8wn\") pod \"csi-hostpathplugin-w9lpz\" (UID: \"af99a6af-5df3-4b87-8f14-a564c5d86164\") " pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.164743 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9grql" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.171634 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-zj44h" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.182833 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.185558 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.186275 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-5l26l"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.189116 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-p42pr"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.203716 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.199697 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.200709 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-v5w2k"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.196241 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q" Feb 18 19:19:43 crc kubenswrapper[4942]: E0218 19:19:43.199787 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:43.699754096 +0000 UTC m=+143.404686761 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.204519 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.204524 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnrbc\" (UniqueName: \"kubernetes.io/projected/8134898c-a265-4fa0-8548-075ea0812b7b-kube-api-access-pnrbc\") pod \"router-default-5444994796-fgw8l\" (UID: \"8134898c-a265-4fa0-8548-075ea0812b7b\") " pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:19:43 crc kubenswrapper[4942]: E0218 19:19:43.207404 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:43.707378332 +0000 UTC m=+143.412310997 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.210673 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.215271 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jfkrb"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.223243 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.244149 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-s4kjv" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.264802 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.272303 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-xs9jl" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.314911 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:43 crc kubenswrapper[4942]: E0218 19:19:43.315213 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:43.815193659 +0000 UTC m=+143.520126324 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.368358 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.379313 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.416352 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:43 crc kubenswrapper[4942]: E0218 19:19:43.416728 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:43.916716086 +0000 UTC m=+143.621648751 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.517594 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:43 crc kubenswrapper[4942]: E0218 19:19:43.519427 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:44.018220963 +0000 UTC m=+143.723153628 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.606489 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.621428 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:43 crc kubenswrapper[4942]: E0218 19:19:43.621880 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:44.121860638 +0000 UTC m=+143.826793293 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.632056 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-s57sd"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.720882 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-4pmfw" podStartSLOduration=122.720861777 podStartE2EDuration="2m2.720861777s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:43.719997274 +0000 UTC m=+143.424929939" watchObservedRunningTime="2026-02-18 19:19:43.720861777 +0000 UTC m=+143.425794442" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.722048 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2ldmd" podStartSLOduration=122.722038569 podStartE2EDuration="2m2.722038569s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:43.675688569 +0000 UTC m=+143.380621234" watchObservedRunningTime="2026-02-18 19:19:43.722038569 +0000 UTC m=+143.426971234" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.724958 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:43 crc kubenswrapper[4942]: E0218 19:19:43.725068 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:44.22505474 +0000 UTC m=+143.929987405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.725278 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:43 crc kubenswrapper[4942]: E0218 19:19:43.725541 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:44.225533973 +0000 UTC m=+143.930466638 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.748526 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.757670 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z4t28"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.813617 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p"] Feb 18 19:19:43 crc kubenswrapper[4942]: W0218 19:19:43.819288 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67ea138c_808d_40ee_9e77_2435676f7fba.slice/crio-295d840e2d2cbcd9fed3e4010b0664d7b5d4e6da4e446024f4f143ed6ae594bc WatchSource:0}: Error finding container 295d840e2d2cbcd9fed3e4010b0664d7b5d4e6da4e446024f4f143ed6ae594bc: Status 404 returned error can't find the container with id 295d840e2d2cbcd9fed3e4010b0664d7b5d4e6da4e446024f4f143ed6ae594bc Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.826270 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:43 crc kubenswrapper[4942]: 
E0218 19:19:43.826689 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:44.32666499 +0000 UTC m=+144.031597655 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.838167 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.857748 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq"] Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.858935 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z"] Feb 18 19:19:43 crc kubenswrapper[4942]: W0218 19:19:43.917967 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d6ad520_b407_4b86_867b_9e9658bfa536.slice/crio-561f208636e4ed3a972d1961d576d8357f830eea84893972b2e168b33bc8de2c WatchSource:0}: Error finding container 561f208636e4ed3a972d1961d576d8357f830eea84893972b2e168b33bc8de2c: Status 404 returned error can't find the container with id 561f208636e4ed3a972d1961d576d8357f830eea84893972b2e168b33bc8de2c Feb 18 
19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.928123 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:43 crc kubenswrapper[4942]: E0218 19:19:43.928569 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:44.428552108 +0000 UTC m=+144.133484773 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.968539 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" event={"ID":"efab374b-fec3-4b4e-81f1-002715812a67","Type":"ContainerStarted","Data":"3c276811f364fb83706109331be8399abc2c7a535cfd237e4abe3dc07119fee5"} Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.975667 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" event={"ID":"42dda107-038c-42c1-8182-52bee75caea9","Type":"ContainerStarted","Data":"00cc93cdee68acda24bf0c7ef246cedca573cf4a425ffa82cd541a7e7fb12fe0"} Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.976248 4942 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.980058 4942 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-kpfjc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.980095 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" podUID="42dda107-038c-42c1-8182-52bee75caea9" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.982375 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" event={"ID":"48a8b317-27eb-4d20-93ad-37fa559ec858","Type":"ContainerStarted","Data":"2def3982ef8fbe6c1791a93b229c90af6eb468fdba16041dec0a2cea286b5b3e"} Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.989272 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn" event={"ID":"8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae","Type":"ContainerStarted","Data":"4368118890a7a63201ed7f0bb9d641c1a985ddd14f6b2f95c6e0fe5ff3b25845"} Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.998366 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9" event={"ID":"86fdeda0-1ae3-488d-9612-d633a5fca64f","Type":"ContainerStarted","Data":"3e45dbfff30834bdc3ea5cfa860f4a71f2e183e6f4466624047f4e79c5f4c782"} Feb 18 19:19:43 crc kubenswrapper[4942]: I0218 19:19:43.998414 4942 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9" event={"ID":"86fdeda0-1ae3-488d-9612-d633a5fca64f","Type":"ContainerStarted","Data":"c6fb470a9f7dd52043826f7743424c4910d5221d27151e50832dbc21d2c68477"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.029200 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:44 crc kubenswrapper[4942]: E0218 19:19:44.029720 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:44.529699185 +0000 UTC m=+144.234631850 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.071833 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bgd6x" event={"ID":"ae259edb-f577-48b8-b236-91656ac269d2","Type":"ContainerStarted","Data":"6e49fe8a89e451498bdcab8bc2c5d3c214682dd0957f2a2a6d828c820f095390"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.071882 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bgd6x" event={"ID":"ae259edb-f577-48b8-b236-91656ac269d2","Type":"ContainerStarted","Data":"e5df29d48d25aa3ea435cdc1318c970ef57f723e811a569ec499280ebf2c8923"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.081212 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-zj44h"] Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.084598 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5"] Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.089014 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr"] Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.097641 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" 
event={"ID":"6890b7aa-fac3-4c00-90cc-4618ddfae25e","Type":"ContainerStarted","Data":"9dc5c5908b962492b879d1c2f73708683ee59282a9e3326436550f837bfce9de"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.104191 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq"] Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.118766 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9grql"] Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.119461 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p" event={"ID":"05aed8e4-390c-4589-8a61-2aab50a1d90f","Type":"ContainerStarted","Data":"87b1c55fc887a15eed1d534f98401fa9f1353ef360325bad77bd1d77df197bac"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.121066 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9wcp7"] Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.133022 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:44 crc kubenswrapper[4942]: E0218 19:19:44.133344 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:44.633330299 +0000 UTC m=+144.338262964 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.205657 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-b488q"] Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.207432 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tndhs" event={"ID":"cb8403e3-f9b3-4ddf-8688-1a025a2b9291","Type":"ContainerStarted","Data":"2fbc58cab36a5d2b4e6a2405f15a520af72a5fbbdf8fc502d8c2eabf69ff0731"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.208100 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq"] Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.208649 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-tndhs" Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.235822 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:44 crc kubenswrapper[4942]: E0218 19:19:44.236109 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:44.73609363 +0000 UTC m=+144.441026295 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.260655 4942 patch_prober.go:28] interesting pod/downloads-7954f5f757-tndhs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.260719 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tndhs" podUID="cb8403e3-f9b3-4ddf-8688-1a025a2b9291" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 18 19:19:44 crc kubenswrapper[4942]: W0218 19:19:44.278809 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e51d4dc_e813_4166_bb6a_45d083a09d2a.slice/crio-14f50396c8170cc2a69c2e6637c93b60bc26e9ebdd445e1de739aeb3a386b19a WatchSource:0}: Error finding container 14f50396c8170cc2a69c2e6637c93b60bc26e9ebdd445e1de739aeb3a386b19a: Status 404 returned error can't find the container with id 14f50396c8170cc2a69c2e6637c93b60bc26e9ebdd445e1de739aeb3a386b19a Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.284853 4942 generic.go:334] "Generic (PLEG): container finished" 
podID="cbe755cf-b7a2-4557-9368-5d71df455408" containerID="939dca11536ea0f62b8c54cd4880927921818e8fa63a125e07c0d44498b1e7c2" exitCode=0 Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.284956 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" event={"ID":"cbe755cf-b7a2-4557-9368-5d71df455408","Type":"ContainerDied","Data":"939dca11536ea0f62b8c54cd4880927921818e8fa63a125e07c0d44498b1e7c2"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.284996 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" event={"ID":"cbe755cf-b7a2-4557-9368-5d71df455408","Type":"ContainerStarted","Data":"2d9ae5080d8a0911d435f2cee044bba81aa7a5e14f68397f7975e962531d6193"} Feb 18 19:19:44 crc kubenswrapper[4942]: W0218 19:19:44.306069 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod696bcbdd_c9ca_45cd_ae12_e733919e2832.slice/crio-1dc34889cbf59b8791ee47c092b48d9fc2128835cc39bc93bf9951ee5a6d0e78 WatchSource:0}: Error finding container 1dc34889cbf59b8791ee47c092b48d9fc2128835cc39bc93bf9951ee5a6d0e78: Status 404 returned error can't find the container with id 1dc34889cbf59b8791ee47c092b48d9fc2128835cc39bc93bf9951ee5a6d0e78 Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.337702 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:44 crc kubenswrapper[4942]: E0218 19:19:44.339415 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:44.839402276 +0000 UTC m=+144.544334941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.350421 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h" event={"ID":"e3586689-cf81-4cd2-84d1-70b0ce221b9d","Type":"ContainerStarted","Data":"10757be4e8c30187826ef9ec48219806ca1641a5947bd7e3a04e2113cd573c9b"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.359057 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-s4kjv"] Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.418462 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" event={"ID":"4afc5765-32dc-4b49-b1a3-9141c2c96087","Type":"ContainerStarted","Data":"47c265a739641c55bf6470c05de2623aba65db5ef5d48e5181131b6bdf46ed0e"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.437628 4942 generic.go:334] "Generic (PLEG): container finished" podID="bccecc4d-32d0-4367-a3b6-e35ddf53dd1a" containerID="fbaafee7cecf61bb8d77ca6672d2ac8ccf91008f81ea26809234c8c633d166e3" exitCode=0 Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.437698 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm" 
event={"ID":"bccecc4d-32d0-4367-a3b6-e35ddf53dd1a","Type":"ContainerDied","Data":"fbaafee7cecf61bb8d77ca6672d2ac8ccf91008f81ea26809234c8c633d166e3"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.437807 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm" event={"ID":"bccecc4d-32d0-4367-a3b6-e35ddf53dd1a","Type":"ContainerStarted","Data":"0489fffbee81e5796046d641c35020c77c5b7bd4227cf3560686542a55639094"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.440005 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:44 crc kubenswrapper[4942]: E0218 19:19:44.440232 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:44.940202634 +0000 UTC m=+144.645135299 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.445158 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:44 crc kubenswrapper[4942]: E0218 19:19:44.447966 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:44.947944323 +0000 UTC m=+144.652876988 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.460191 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-w9lpz"] Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.485405 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" event={"ID":"714a349f-4480-4467-9041-7cae31df7686","Type":"ContainerStarted","Data":"cd56566509dea5efaa88e554c374e00e1e507d1aed85aabf7157db6e887929bb"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.502975 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" event={"ID":"5d6ad520-b407-4b86-867b-9e9658bfa536","Type":"ContainerStarted","Data":"561f208636e4ed3a972d1961d576d8357f830eea84893972b2e168b33bc8de2c"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.557954 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:44 crc kubenswrapper[4942]: E0218 19:19:44.559931 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 19:19:45.059900012 +0000 UTC m=+144.764832677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.572136 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6" event={"ID":"c8e73114-6ccf-40ba-94e8-437e2db303fb","Type":"ContainerStarted","Data":"ef5beed15fc536692fa5b07511ace2af8b9c9d4db6acd414d2bc77602ba4c2be"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.601224 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4"] Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.606526 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5l26l" event={"ID":"5683bb73-dc7f-40ed-86cd-0c08f2d38147","Type":"ContainerStarted","Data":"76d66aaf89f1a5aa5957e318124bcfa92f6a6c37df6e5abcffc91fd45db84790"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.656598 4942 csr.go:261] certificate signing request csr-rbtjn is approved, waiting to be issued Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.663149 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-s57sd" event={"ID":"67ea138c-808d-40ee-9e77-2435676f7fba","Type":"ContainerStarted","Data":"295d840e2d2cbcd9fed3e4010b0664d7b5d4e6da4e446024f4f143ed6ae594bc"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.665647 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:44 crc kubenswrapper[4942]: E0218 19:19:44.666130 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:45.166113976 +0000 UTC m=+144.871046641 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.668160 4942 csr.go:257] certificate signing request csr-rbtjn is issued Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.679266 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" event={"ID":"fa346657-46eb-4817-b206-4c09d46d4a55","Type":"ContainerStarted","Data":"ef127dd826aba726a31acfac09be4ab1cb60219849d22bd68a56ddc0ec361b83"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.680156 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.687835 4942 patch_prober.go:28] interesting 
pod/route-controller-manager-6576b87f9c-xbkl5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.687879 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" podUID="fa346657-46eb-4817-b206-4c09d46d4a55" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.689302 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm"] Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.717479 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9" event={"ID":"994be5c4-0c9d-4577-82e8-644d64c3ab1d","Type":"ContainerStarted","Data":"5b850e15f65c3ef4888ece7dbbfbdcfb365e837e820448893eebbc6203a65e52"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.719425 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-7tzn9" podStartSLOduration=123.719407283 podStartE2EDuration="2m3.719407283s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:44.717367338 +0000 UTC m=+144.422300003" watchObservedRunningTime="2026-02-18 19:19:44.719407283 +0000 UTC m=+144.424339948" Feb 18 19:19:44 crc kubenswrapper[4942]: W0218 19:19:44.720226 4942 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01ba4570_01bb_4964_8c1d_791c25d72a1a.slice/crio-5e1dc2e31f1a650ed17f640e417c2728e29699e3f206e468494747757484a591 WatchSource:0}: Error finding container 5e1dc2e31f1a650ed17f640e417c2728e29699e3f206e468494747757484a591: Status 404 returned error can't find the container with id 5e1dc2e31f1a650ed17f640e417c2728e29699e3f206e468494747757484a591 Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.738547 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z" event={"ID":"2407a935-a8b9-4894-baaf-7460fee3d22b","Type":"ContainerStarted","Data":"ac3114089efca6f7a31fc4b13c9fa503f6eebaa65322f6ddca1e7337eb4a3ab0"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.757085 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l" event={"ID":"9a79c946-4621-4b6d-af59-6b919d125502","Type":"ContainerStarted","Data":"3e5c12681c963fd415917b457a854cb5a2dd4d42af76619484fdaf4f737d2da1"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.769018 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:44 crc kubenswrapper[4942]: E0218 19:19:44.772404 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:45.272384271 +0000 UTC m=+144.977316936 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.797110 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" podStartSLOduration=122.797089617 podStartE2EDuration="2m2.797089617s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:44.791648861 +0000 UTC m=+144.496581526" watchObservedRunningTime="2026-02-18 19:19:44.797089617 +0000 UTC m=+144.502022282" Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.804900 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" event={"ID":"709f9378-2d1c-4158-9521-e6000e06eb5e","Type":"ContainerStarted","Data":"4e59809c46fdf61cbd250efd26b5505441876d2b4287cbd6ca8c4b31f6dd6627"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.813899 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" event={"ID":"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e","Type":"ContainerStarted","Data":"c680d03d1b77f75b2e5820fafd3de47164c025235120a5a61943f07c8f9f37a9"} Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.813948 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 
19:19:44.819181 4942 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-g5df6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" start-of-body= Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.819343 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" podUID="5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": dial tcp 10.217.0.26:5443: connect: connection refused" Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.827259 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-4pmfw" Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.843682 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" podStartSLOduration=123.843663223 podStartE2EDuration="2m3.843663223s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:44.840285332 +0000 UTC m=+144.545218007" watchObservedRunningTime="2026-02-18 19:19:44.843663223 +0000 UTC m=+144.548595878" Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.867378 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-bd7zz" podStartSLOduration=123.867365832 podStartE2EDuration="2m3.867365832s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:44.865476151 
+0000 UTC m=+144.570408816" watchObservedRunningTime="2026-02-18 19:19:44.867365832 +0000 UTC m=+144.572298497" Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.873428 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:44 crc kubenswrapper[4942]: E0218 19:19:44.874055 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:45.374033802 +0000 UTC m=+145.078966467 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.913033 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-5l26l" podStartSLOduration=123.913016413 podStartE2EDuration="2m3.913016413s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:44.912199671 +0000 UTC m=+144.617132336" watchObservedRunningTime="2026-02-18 19:19:44.913016413 +0000 UTC m=+144.617949078" Feb 18 19:19:44 crc kubenswrapper[4942]: 
I0218 19:19:44.974682 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:44 crc kubenswrapper[4942]: E0218 19:19:44.976508 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:45.476469474 +0000 UTC m=+145.181402139 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.976917 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:44 crc kubenswrapper[4942]: E0218 19:19:44.980463 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 19:19:45.480421621 +0000 UTC m=+145.185354476 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:44 crc kubenswrapper[4942]: I0218 19:19:44.994093 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-tndhs" podStartSLOduration=123.994071439 podStartE2EDuration="2m3.994071439s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:44.991731536 +0000 UTC m=+144.696664201" watchObservedRunningTime="2026-02-18 19:19:44.994071439 +0000 UTC m=+144.699004104" Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.078446 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:45 crc kubenswrapper[4942]: E0218 19:19:45.078706 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:45.57867393 +0000 UTC m=+145.283606595 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.080490 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:45 crc kubenswrapper[4942]: E0218 19:19:45.112276 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:45.612254826 +0000 UTC m=+145.317187491 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.197357 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:45 crc kubenswrapper[4942]: E0218 19:19:45.197952 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:45.697913515 +0000 UTC m=+145.402846180 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.299475 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:45 crc kubenswrapper[4942]: E0218 19:19:45.301040 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:45.801024946 +0000 UTC m=+145.505957611 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.387454 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l" podStartSLOduration=124.387434936 podStartE2EDuration="2m4.387434936s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:45.386137631 +0000 UTC m=+145.091070296" watchObservedRunningTime="2026-02-18 19:19:45.387434936 +0000 UTC m=+145.092367601" Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.405104 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:45 crc kubenswrapper[4942]: E0218 19:19:45.407872 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:45.907845196 +0000 UTC m=+145.612777861 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.410160 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:45 crc kubenswrapper[4942]: E0218 19:19:45.410679 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:45.910664512 +0000 UTC m=+145.615597177 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.515350 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" podStartSLOduration=123.51532984400001 podStartE2EDuration="2m3.515329844s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:45.514481451 +0000 UTC m=+145.219414116" watchObservedRunningTime="2026-02-18 19:19:45.515329844 +0000 UTC m=+145.220262509" Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.515710 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:45 crc kubenswrapper[4942]: E0218 19:19:45.515795 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:46.015778086 +0000 UTC m=+145.720710751 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.518455 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:45 crc kubenswrapper[4942]: E0218 19:19:45.518880 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:46.01886911 +0000 UTC m=+145.723801765 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.540532 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7x2vd" podStartSLOduration=125.540515423 podStartE2EDuration="2m5.540515423s" podCreationTimestamp="2026-02-18 19:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:45.540408081 +0000 UTC m=+145.245340746" watchObservedRunningTime="2026-02-18 19:19:45.540515423 +0000 UTC m=+145.245448088" Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.618586 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9" podStartSLOduration=124.618565948 podStartE2EDuration="2m4.618565948s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:45.618309661 +0000 UTC m=+145.323242326" watchObservedRunningTime="2026-02-18 19:19:45.618565948 +0000 UTC m=+145.323498613" Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.620109 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:45 crc kubenswrapper[4942]: E0218 19:19:45.620623 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:46.120607653 +0000 UTC m=+145.825540318 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.669211 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-18 19:14:44 +0000 UTC, rotation deadline is 2026-11-04 01:03:52.474728707 +0000 UTC Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.669248 4942 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6197h44m6.805484182s for next certificate rotation Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.723554 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:45 crc kubenswrapper[4942]: E0218 19:19:45.724034 4942 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:46.224002391 +0000 UTC m=+145.928935316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.825145 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:45 crc kubenswrapper[4942]: E0218 19:19:45.825663 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:46.325645802 +0000 UTC m=+146.030578467 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.880225    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" event={"ID":"714a349f-4480-4467-9041-7cae31df7686","Type":"ContainerStarted","Data":"e5d2a47ba0ce96a07bff9f97878903390d678e02fc6d15bb8bb3d1df39682423"}
Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.885304    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5" event={"ID":"bb96ca2b-27a4-42e3-af7f-3514321500a3","Type":"ContainerStarted","Data":"47429456a27474748583ff17fb1dcdae21305252396f41651b4af5ddedb5f451"}
Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.885367    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5" event={"ID":"bb96ca2b-27a4-42e3-af7f-3514321500a3","Type":"ContainerStarted","Data":"bffddcd1642856a53d8a14eecb2793e81b211bf6edef12aaa5d85a3d065430f2"}
Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.893322    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7" event={"ID":"696bcbdd-c9ca-45cd-ae12-e733919e2832","Type":"ContainerStarted","Data":"7ae806a37e35c4beb7c6105bf316a46e7ad9818cc4d2fffbc5e24dbe44cb7317"}
Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.893379    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7" event={"ID":"696bcbdd-c9ca-45cd-ae12-e733919e2832","Type":"ContainerStarted","Data":"1dc34889cbf59b8791ee47c092b48d9fc2128835cc39bc93bf9951ee5a6d0e78"}
Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.911082    4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-x5rln" podStartSLOduration=124.911056935 podStartE2EDuration="2m4.911056935s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:45.909184964 +0000 UTC m=+145.614117629" watchObservedRunningTime="2026-02-18 19:19:45.911056935 +0000 UTC m=+145.615989600"
Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.933508    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-566m9" event={"ID":"994be5c4-0c9d-4577-82e8-644d64c3ab1d","Type":"ContainerStarted","Data":"b1a45066242ae6994dd79542ee99045ae7535a669417f996bc8960fa7960fe6f"}
Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.935184    4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:45 crc kubenswrapper[4942]: E0218 19:19:45.935578    4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:46.435564756 +0000 UTC m=+146.140497421 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.946339    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" event={"ID":"5d6ad520-b407-4b86-867b-9e9658bfa536","Type":"ContainerStarted","Data":"023457a07127e4c5a3020cc7b562185bd2142efdc686d72b522eec24b84f6fdf"}
Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.948357    4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28"
Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.949434    4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rd6k5" podStartSLOduration=123.949409029 podStartE2EDuration="2m3.949409029s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:45.939211804 +0000 UTC m=+145.644144469" watchObservedRunningTime="2026-02-18 19:19:45.949409029 +0000 UTC m=+145.654341694"
Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.967433    4942 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-z4t28 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body=
Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.967491    4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" podUID="5d6ad520-b407-4b86-867b-9e9658bfa536" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused"
Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.985787    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bgd6x" event={"ID":"ae259edb-f577-48b8-b236-91656ac269d2","Type":"ContainerStarted","Data":"5cecbc2c8e7a93ff837b8535180d6419cbf30671f05bbc9aeefb44cd259a89dc"}
Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.992981    4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-9wcp7" podStartSLOduration=123.992953903 podStartE2EDuration="2m3.992953903s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:45.968147144 +0000 UTC m=+145.673079809" watchObservedRunningTime="2026-02-18 19:19:45.992953903 +0000 UTC m=+145.697886568"
Feb 18 19:19:45 crc kubenswrapper[4942]: I0218 19:19:45.994607    4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.020097    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr" event={"ID":"a873b689-a8f1-4125-b97c-e9d0f6b06397","Type":"ContainerStarted","Data":"53a33a4937e1afc56a58b316072961f268a8e4a1365a52c3259f2ce5f8c81354"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.020147    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr" event={"ID":"a873b689-a8f1-4125-b97c-e9d0f6b06397","Type":"ContainerStarted","Data":"89aedd7f93443a63bf862888ba62bbe12ed9d421789f7e6f7f47bb7c9ca5cc3f"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.020910    4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.026026    4942 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lrcbr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.026074    4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr" podUID="a873b689-a8f1-4125-b97c-e9d0f6b06397" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.032674    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" event={"ID":"cbe755cf-b7a2-4557-9368-5d71df455408","Type":"ContainerStarted","Data":"8075d6be95bf7d29fcd6b4ed79f03d35824ff4d5d4466851d8cf2417fc416fa9"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.035942    4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:19:46 crc kubenswrapper[4942]: E0218 19:19:46.036352    4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:46.536306642 +0000 UTC m=+146.241239307 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.036559    4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:46 crc kubenswrapper[4942]: E0218 19:19:46.038108    4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:46.538099471 +0000 UTC m=+146.243032136 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.058198    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" event={"ID":"af99a6af-5df3-4b87-8f14-a564c5d86164","Type":"ContainerStarted","Data":"ae917070a1fb2ae6049a3cb0e5e19f0b7f5904b856c84b810a11428d3d25d00f"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.089744    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p" event={"ID":"05aed8e4-390c-4589-8a61-2aab50a1d90f","Type":"ContainerStarted","Data":"09fdca091a253ff9bd2dc0eac63c261789ab9c1bca446f6ff56e243061cd20cc"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.092187    4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" podStartSLOduration=125.092162588 podStartE2EDuration="2m5.092162588s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.002939453 +0000 UTC m=+145.707872118" watchObservedRunningTime="2026-02-18 19:19:46.092162588 +0000 UTC m=+145.797095253"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.114153    4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-bgd6x" podStartSLOduration=125.114127591 podStartE2EDuration="2m5.114127591s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.11410738 +0000 UTC m=+145.819040045" watchObservedRunningTime="2026-02-18 19:19:46.114127591 +0000 UTC m=+145.819060256"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.123856    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq" event={"ID":"9b732dca-66e7-48c3-bd7d-5efc1d9662d7","Type":"ContainerStarted","Data":"b5a7bbd5e7ba5c2f6d9d9fc0a9eee13fffbab2434f628724afe27d5345b240b4"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.123950    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq" event={"ID":"9b732dca-66e7-48c3-bd7d-5efc1d9662d7","Type":"ContainerStarted","Data":"d1d56ceab59f37ae4741e7e1c65e710e43f1e0d13713932ac9b62ba435cb6040"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.141421    4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.143055    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zj44h" event={"ID":"b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7","Type":"ContainerStarted","Data":"5408388b9e1ab47f7983784f7e9c54819f35fef18e080b4cab94cc3a9cc22231"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.143103    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zj44h" event={"ID":"b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7","Type":"ContainerStarted","Data":"9609c11d3bd02d2379390a0678d9b91ab4b9f13828b5ffd9ce9cb1949f5a7f04"}
Feb 18 19:19:46 crc kubenswrapper[4942]: E0218 19:19:46.143166    4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:46.643147263 +0000 UTC m=+146.348079928 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.193968    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm" event={"ID":"bccecc4d-32d0-4367-a3b6-e35ddf53dd1a","Type":"ContainerStarted","Data":"65909acaa509149c03419b0d66bafc7e7609f918429ef17452b18ee9b7ab4fd8"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.194699    4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.232144    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq" event={"ID":"157afc1c-f5df-419b-a760-336d14bbbd6d","Type":"ContainerStarted","Data":"4ddc0e64f67697d4ef526590ed52dd0abc67c021061db8899066480a3187ac33"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.232585    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq" event={"ID":"157afc1c-f5df-419b-a760-336d14bbbd6d","Type":"ContainerStarted","Data":"8f6ffcaa602c4c74d82a25acb920b5bc142f908014eb2a7b4111736183236ea0"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.238226    4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" podStartSLOduration=124.238206595 podStartE2EDuration="2m4.238206595s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.176840331 +0000 UTC m=+145.881773006" watchObservedRunningTime="2026-02-18 19:19:46.238206595 +0000 UTC m=+145.943139260"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.239395    4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr" podStartSLOduration=124.239389587 podStartE2EDuration="2m4.239389587s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.236991783 +0000 UTC m=+145.941924448" watchObservedRunningTime="2026-02-18 19:19:46.239389587 +0000 UTC m=+145.944322252"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.247081    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq" event={"ID":"83c8ec50-d07e-4c96-80b8-22cf232b015c","Type":"ContainerStarted","Data":"8fb89d9b578bb5f34f43df673b2dd799864eeb7e473dd13a2af65613144c2452"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.247130    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq" event={"ID":"83c8ec50-d07e-4c96-80b8-22cf232b015c","Type":"ContainerStarted","Data":"f008ad0a63afef1fd1b3f75636edc934773c6ff3eb99cbd42972daa614dbabd7"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.247794    4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.248451    4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:46 crc kubenswrapper[4942]: E0218 19:19:46.250272    4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:46.750259 +0000 UTC m=+146.455191665 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.268000    4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p" podStartSLOduration=125.267983218 podStartE2EDuration="2m5.267983218s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.267156156 +0000 UTC m=+145.972088821" watchObservedRunningTime="2026-02-18 19:19:46.267983218 +0000 UTC m=+145.972915873"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.308864    4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v6bqq" podStartSLOduration=124.3088347 podStartE2EDuration="2m4.3088347s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.305787368 +0000 UTC m=+146.010720033" watchObservedRunningTime="2026-02-18 19:19:46.3088347 +0000 UTC m=+146.013767365"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.331143    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6" event={"ID":"c8e73114-6ccf-40ba-94e8-437e2db303fb","Type":"ContainerStarted","Data":"cdf5934d19dddd363dbdd2cf3f19ac1b20b020a1df145501f6daf86f7077de32"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.351337    4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:19:46 crc kubenswrapper[4942]: E0218 19:19:46.352501    4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:46.852481637 +0000 UTC m=+146.557414302 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.358352    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" event={"ID":"efab374b-fec3-4b4e-81f1-002715812a67","Type":"ContainerStarted","Data":"1be9f6409e8403c5211f5628cc4c7f37ce2a207d76287a814454050db0e28241"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.359682    4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.388455    4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-vqvpq" podStartSLOduration=125.38843747600001 podStartE2EDuration="2m5.388437476s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.386445923 +0000 UTC m=+146.091378588" watchObservedRunningTime="2026-02-18 19:19:46.388437476 +0000 UTC m=+146.093370141"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.400077    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-xs9jl" event={"ID":"ae63de17-3438-46ed-94f9-5f51d8a216fd","Type":"ContainerStarted","Data":"f5e48d7471916432d9cbe7bf403fb08411929b47a2a217b6d0f003a8e4238ee0"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.400133    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-xs9jl" event={"ID":"ae63de17-3438-46ed-94f9-5f51d8a216fd","Type":"ContainerStarted","Data":"78fc4d140922930f6a2633852b8874ac954b960b75d9985a28e8bdb6fdb4b4d8"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.403752    4942 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jfkrb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body=
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.403810    4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" podUID="efab374b-fec3-4b4e-81f1-002715812a67" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.420062    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" event={"ID":"fa346657-46eb-4817-b206-4c09d46d4a55","Type":"ContainerStarted","Data":"04d3d8f0260a49004f14e1e12877830297236a2190fa7c6cae15db82a5df0a0c"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.429329    4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm" podStartSLOduration=125.429303478 podStartE2EDuration="2m5.429303478s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.420912162 +0000 UTC m=+146.125844827" watchObservedRunningTime="2026-02-18 19:19:46.429303478 +0000 UTC m=+146.134236143"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.439659    4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.439879    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q" event={"ID":"51ed31a1-9bf0-40ff-8bca-041d691662b4","Type":"ContainerStarted","Data":"208faa74619d8143ee886fd8c9a08a6db8b9a28d4026793c511e3bb3bd1b1a6e"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.439908    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q" event={"ID":"51ed31a1-9bf0-40ff-8bca-041d691662b4","Type":"ContainerStarted","Data":"caafd366ff0d6b72e6e5c23941738ac2c65895cdf5d540470d5723384db002c6"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.455462    4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:46 crc kubenswrapper[4942]: E0218 19:19:46.455852    4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:46.955839814 +0000 UTC m=+146.660772479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.476163    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9grql" event={"ID":"0e51d4dc-e813-4166-bb6a-45d083a09d2a","Type":"ContainerStarted","Data":"43de3611589dad5ebefcd809b6f805da55d469f91eabf8ba425d9f9754800f2b"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.476226    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9grql" event={"ID":"0e51d4dc-e813-4166-bb6a-45d083a09d2a","Type":"ContainerStarted","Data":"6099bd6de926b0c3866a897a61dc3972f770d5e22459eb21f11d46fc377c9b7b"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.476236    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9grql" event={"ID":"0e51d4dc-e813-4166-bb6a-45d083a09d2a","Type":"ContainerStarted","Data":"14f50396c8170cc2a69c2e6637c93b60bc26e9ebdd445e1de739aeb3a386b19a"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.487608    4942 generic.go:334] "Generic (PLEG): container finished" podID="6890b7aa-fac3-4c00-90cc-4618ddfae25e" containerID="aabc415517a0dc7244ba58d2c2fc6db9a02923059a6710af42a0290ab193e41b" exitCode=0
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.487727    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" event={"ID":"6890b7aa-fac3-4c00-90cc-4618ddfae25e","Type":"ContainerDied","Data":"aabc415517a0dc7244ba58d2c2fc6db9a02923059a6710af42a0290ab193e41b"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.504180    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" event={"ID":"01ba4570-01bb-4964-8c1d-791c25d72a1a","Type":"ContainerStarted","Data":"5e1dc2e31f1a650ed17f640e417c2728e29699e3f206e468494747757484a591"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.523515    4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq" podStartSLOduration=124.523487548 podStartE2EDuration="2m4.523487548s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.472188295 +0000 UTC m=+146.177120960" watchObservedRunningTime="2026-02-18 19:19:46.523487548 +0000 UTC m=+146.228420213"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.541095    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn" event={"ID":"8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae","Type":"ContainerStarted","Data":"5a2671a1f18882459c61a0a3dd8093d7f9cf77bed4bcbedd9e017981b458f1ea"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.541485    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn" event={"ID":"8a08dbbe-ad0e-444f-8d4a-4ad6f2e84aae","Type":"ContainerStarted","Data":"8132c3f599f90ae6aae6c73184d377f311bcfa30b5c6d6e3837d5606c2fe285d"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.553179    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z" event={"ID":"2407a935-a8b9-4894-baaf-7460fee3d22b","Type":"ContainerStarted","Data":"da2cd9ae03daf77b5cfa46a4a680a7d22744c4fe54ef89d19fd2f943747bebfa"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.561047    4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:19:46 crc kubenswrapper[4942]: E0218 19:19:46.564033    4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:47.06400645 +0000 UTC m=+146.768939115 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.564451    4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" podStartSLOduration=124.564435232 podStartE2EDuration="2m4.564435232s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.52653934 +0000 UTC m=+146.231472005" watchObservedRunningTime="2026-02-18 19:19:46.564435232 +0000 UTC m=+146.269367897"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.568881    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5l26l" event={"ID":"5683bb73-dc7f-40ed-86cd-0c08f2d38147","Type":"ContainerStarted","Data":"49458ca39b9ba344fe8c10dba2a8e9386f116a326c032cdb747d289d4ac6f704"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.586286    4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z4vt6" podStartSLOduration=125.586264391 podStartE2EDuration="2m5.586264391s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.564528094 +0000 UTC m=+146.269460759" watchObservedRunningTime="2026-02-18 19:19:46.586264391 +0000 UTC m=+146.291197056"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.612666    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" event={"ID":"5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e","Type":"ContainerStarted","Data":"b851a866367cf3a9e9d464f309cf034e80cf40b320f83dc8f140d81b8ccea539"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.637532    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-s57sd" event={"ID":"67ea138c-808d-40ee-9e77-2435676f7fba","Type":"ContainerStarted","Data":"88a9b6c20ec29b02eb14c5a0666366dcf54085c50b07e9ba5fbb1b8473e769ea"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.651680    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm" event={"ID":"0ec933ee-8c36-49a0-8ba5-c7442f4de367","Type":"ContainerStarted","Data":"73b7ee7c3ba82bf630f09cb666ee9aa5b699be5d474b17959eab9af478c5664e"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.651727    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm" event={"ID":"0ec933ee-8c36-49a0-8ba5-c7442f4de367","Type":"ContainerStarted","Data":"2cc63270a69f0d0195f40815e636ff4b43d169b2bfb1846e3b0329458017f944"}
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.656651    4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q" podStartSLOduration=124.656631108 podStartE2EDuration="2m4.656631108s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.654977303 +0000 UTC m=+146.359909968" watchObservedRunningTime="2026-02-18 19:19:46.656631108 +0000 UTC m=+146.361563773"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.656922    4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-xs9jl" podStartSLOduration=6.656917086 podStartE2EDuration="6.656917086s" podCreationTimestamp="2026-02-18 19:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.614508892 +0000 UTC m=+146.319441557" watchObservedRunningTime="2026-02-18 19:19:46.656917086 +0000 UTC m=+146.361849751"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.657176    4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.662666    4942 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-zz9rm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.662740    4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm" podUID="0ec933ee-8c36-49a0-8ba5-c7442f4de367" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused"
Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.663546    4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:46 crc
kubenswrapper[4942]: E0218 19:19:46.667189 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:47.167172282 +0000 UTC m=+146.872104947 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.710695 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jqs9l" event={"ID":"9a79c946-4621-4b6d-af59-6b919d125502","Type":"ContainerStarted","Data":"48644340e380b46844ef607e947318dad2a4df524b20e1b04fb054fdc4960453"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.755903 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h" event={"ID":"e3586689-cf81-4cd2-84d1-70b0ce221b9d","Type":"ContainerStarted","Data":"be4b176c03020dfebf3570b33951e652a3097cc6fdcfee90689aa5a181dc3945"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.756264 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h" event={"ID":"e3586689-cf81-4cd2-84d1-70b0ce221b9d","Type":"ContainerStarted","Data":"e9e909db52ba119b9620f5e6e0717d04945c6a6467173bcb1c7bc30a6b9c5e35"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.766283 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:46 crc kubenswrapper[4942]: E0218 19:19:46.767052 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:47.267033765 +0000 UTC m=+146.971966420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.789610 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" podStartSLOduration=125.789594513 podStartE2EDuration="2m5.789594513s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.788174345 +0000 UTC m=+146.493107010" watchObservedRunningTime="2026-02-18 19:19:46.789594513 +0000 UTC m=+146.494527168" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.797082 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" 
event={"ID":"48a8b317-27eb-4d20-93ad-37fa559ec858","Type":"ContainerStarted","Data":"ce2530ce2afe97ddb9fdeab46bbfef9f1ded96ff21bc7fe65fb26a0a5f0a5540"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.797145 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" event={"ID":"48a8b317-27eb-4d20-93ad-37fa559ec858","Type":"ContainerStarted","Data":"fde6f5692dc02c57cf442750d36b31132e33ff72cb1eaa4406c7ce4e5cbbfb65"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.835063 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-fgw8l" event={"ID":"8134898c-a265-4fa0-8548-075ea0812b7b","Type":"ContainerStarted","Data":"986d51155a2753a87d6ac316d906b8a21b7b92f640efbc7ca4b3ecf774fa6938"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.835118 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-fgw8l" event={"ID":"8134898c-a265-4fa0-8548-075ea0812b7b","Type":"ContainerStarted","Data":"0c447cb8f794395e4c9ce2034f6bb4715b0be332558abecd230814c64a4a0eac"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.862359 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zpnzn" podStartSLOduration=124.862335995 podStartE2EDuration="2m4.862335995s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.861205544 +0000 UTC m=+146.566138209" watchObservedRunningTime="2026-02-18 19:19:46.862335995 +0000 UTC m=+146.567268660" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.870998 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-s4kjv" 
event={"ID":"461a0658-ae3b-4972-8122-2719276793b9","Type":"ContainerStarted","Data":"75e51a4243e3dc24eeba0de1cbc9eefb0a23eb0bb0b0a08204eb6a5396608558"} Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.871050 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-s4kjv" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.871363 4942 patch_prober.go:28] interesting pod/downloads-7954f5f757-tndhs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.871406 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tndhs" podUID="cb8403e3-f9b3-4ddf-8688-1a025a2b9291" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.872466 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:46 crc kubenswrapper[4942]: E0218 19:19:46.872755 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:47.372745045 +0000 UTC m=+147.077677710 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.885513 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.906114 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9mc8z" podStartSLOduration=125.906093275 podStartE2EDuration="2m5.906093275s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.904858501 +0000 UTC m=+146.609791176" watchObservedRunningTime="2026-02-18 19:19:46.906093275 +0000 UTC m=+146.611025930" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.977818 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-s57sd" podStartSLOduration=7.977798278 podStartE2EDuration="7.977798278s" podCreationTimestamp="2026-02-18 19:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.941376116 +0000 UTC m=+146.646308781" watchObservedRunningTime="2026-02-18 19:19:46.977798278 +0000 UTC m=+146.682730943" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.978692 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm" podStartSLOduration=124.978686262 podStartE2EDuration="2m4.978686262s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:46.977273174 +0000 UTC m=+146.682205839" watchObservedRunningTime="2026-02-18 19:19:46.978686262 +0000 UTC m=+146.683618927" Feb 18 19:19:46 crc kubenswrapper[4942]: I0218 19:19:46.984251 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:46 crc kubenswrapper[4942]: E0218 19:19:46.985905 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:47.485876056 +0000 UTC m=+147.190808721 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.064725 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9grql" podStartSLOduration=126.064700671 podStartE2EDuration="2m6.064700671s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:47.023047818 +0000 UTC m=+146.727980483" watchObservedRunningTime="2026-02-18 19:19:47.064700671 +0000 UTC m=+146.769633336" Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.087080 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:47 crc kubenswrapper[4942]: E0218 19:19:47.087440 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:47.587424044 +0000 UTC m=+147.292356709 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.113399 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-vms6h" podStartSLOduration=126.113375344 podStartE2EDuration="2m6.113375344s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:47.108551484 +0000 UTC m=+146.813484149" watchObservedRunningTime="2026-02-18 19:19:47.113375344 +0000 UTC m=+146.818308009" Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.113951 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-s4kjv" podStartSLOduration=7.113947759 podStartE2EDuration="7.113947759s" podCreationTimestamp="2026-02-18 19:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:47.06909876 +0000 UTC m=+146.774031425" watchObservedRunningTime="2026-02-18 19:19:47.113947759 +0000 UTC m=+146.818880424" Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.190419 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:47 crc kubenswrapper[4942]: E0218 19:19:47.190527 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:47.690506284 +0000 UTC m=+147.395438949 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.190876 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:47 crc kubenswrapper[4942]: E0218 19:19:47.191229 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:47.691222083 +0000 UTC m=+147.396154748 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.193006 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-p42pr" podStartSLOduration=125.19298494 podStartE2EDuration="2m5.19298494s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:47.191588963 +0000 UTC m=+146.896521628" watchObservedRunningTime="2026-02-18 19:19:47.19298494 +0000 UTC m=+146.897917605" Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.255466 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-fgw8l" podStartSLOduration=126.255443565 podStartE2EDuration="2m6.255443565s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:47.251254912 +0000 UTC m=+146.956187577" watchObservedRunningTime="2026-02-18 19:19:47.255443565 +0000 UTC m=+146.960376230" Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.275675 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.275771 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc" Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.293783 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:47 crc kubenswrapper[4942]: E0218 19:19:47.294308 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:47.794284852 +0000 UTC m=+147.499217517 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.387517 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.395934 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:47 crc kubenswrapper[4942]: E0218 
19:19:47.396291 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:47.896274902 +0000 UTC m=+147.601207567 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.397589 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:19:47 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld Feb 18 19:19:47 crc kubenswrapper[4942]: [+]process-running ok Feb 18 19:19:47 crc kubenswrapper[4942]: healthz check failed Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.397673 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.497701 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 
18 19:19:47 crc kubenswrapper[4942]: E0218 19:19:47.498089 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:47.998055036 +0000 UTC m=+147.702987701 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.498459 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:47 crc kubenswrapper[4942]: E0218 19:19:47.498824 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:47.998809597 +0000 UTC m=+147.703742262 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.599432 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:47 crc kubenswrapper[4942]: E0218 19:19:47.599922 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:48.099902303 +0000 UTC m=+147.804834968 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.615950 4942 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-g5df6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.616051 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6" podUID="5a5bda6e-e1c2-4ecf-a531-fbbe8139e91e" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.701591 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:47 crc kubenswrapper[4942]: E0218 19:19:47.702138 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:48.202115199 +0000 UTC m=+147.907047864 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.802833 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:19:47 crc kubenswrapper[4942]: E0218 19:19:47.803314 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:48.303280417 +0000 UTC m=+148.008213082 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.876141 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b488q" event={"ID":"51ed31a1-9bf0-40ff-8bca-041d691662b4","Type":"ContainerStarted","Data":"32ad471fc189e5a633ffd8159099200358fe5c48344c0dc9526ac321b7f1c8f5"}
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.879200 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" event={"ID":"6890b7aa-fac3-4c00-90cc-4618ddfae25e","Type":"ContainerStarted","Data":"7748e5ef607983d5270cd4243cc208c0398aafb4dd52ff8a17b3a606606813a9"}
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.879266 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" event={"ID":"6890b7aa-fac3-4c00-90cc-4618ddfae25e","Type":"ContainerStarted","Data":"b8cc889c625035d34efbc631c55c3b0b102ba0591c1e64a53cd13bcf88045c57"}
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.880839 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" event={"ID":"01ba4570-01bb-4964-8c1d-791c25d72a1a","Type":"ContainerStarted","Data":"5fb82fb77a7895a43a30ace42481cf4c1da624e8742b15c1cb5a5cf3044d7c22"}
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.882092 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" event={"ID":"af99a6af-5df3-4b87-8f14-a564c5d86164","Type":"ContainerStarted","Data":"1314083d10e40e71c4cd17f089a72147abfcc4fee1bb370d542c298f25e78b02"}
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.883479 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rw75p" event={"ID":"05aed8e4-390c-4589-8a61-2aab50a1d90f","Type":"ContainerStarted","Data":"5e9e204518cb98e53f5cfb13561837e69ea57625a59ec69cdf232fc75373a59e"}
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.885326 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq" event={"ID":"83c8ec50-d07e-4c96-80b8-22cf232b015c","Type":"ContainerStarted","Data":"5ce9d1a50c6dcfc37ebf364214ff445874e411fd19d28dd01d5dd58e037a60ad"}
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.887244 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-s4kjv" event={"ID":"461a0658-ae3b-4972-8122-2719276793b9","Type":"ContainerStarted","Data":"3f300ea2b013a0074ea8815ac3e6dd2bde21d5361e0e174bd8c460315a72b91d"}
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.887271 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-s4kjv" event={"ID":"461a0658-ae3b-4972-8122-2719276793b9","Type":"ContainerStarted","Data":"2197dd032944bf4c31c8f13538d2e095332de34388cbac0f53a2ad55277f35ca"}
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.888638 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-zj44h" event={"ID":"b69f8ddf-cdf8-4104-bf4a-d2843c2aefa7","Type":"ContainerStarted","Data":"9b0e82244d95209b70718ead33145da919469f201aed63bae4f5aeff682b279e"}
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.889583 4942 patch_prober.go:28] interesting pod/downloads-7954f5f757-tndhs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.889638 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tndhs" podUID="cb8403e3-f9b3-4ddf-8688-1a025a2b9291" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.897900 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28"
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.900542 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-g5df6"
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.904853 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:47 crc kubenswrapper[4942]: E0218 19:19:47.905255 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:48.405240936 +0000 UTC m=+148.110173591 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.909178 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb"
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.919970 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-v5w2k" podStartSLOduration=126.919954533 podStartE2EDuration="2m6.919954533s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:47.918678388 +0000 UTC m=+147.623611053" watchObservedRunningTime="2026-02-18 19:19:47.919954533 +0000 UTC m=+147.624887188"
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.922224 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zz9rm"
Feb 18 19:19:47 crc kubenswrapper[4942]: I0218 19:19:47.948607 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lrcbr"
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.006068 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:19:48 crc kubenswrapper[4942]: E0218 19:19:48.008987 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:48.508958933 +0000 UTC m=+148.213891808 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.055130 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-zj44h" podStartSLOduration=126.055107977 podStartE2EDuration="2m6.055107977s" podCreationTimestamp="2026-02-18 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:48.026312701 +0000 UTC m=+147.731245366" watchObservedRunningTime="2026-02-18 19:19:48.055107977 +0000 UTC m=+147.760040652"
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.108887 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:48 crc kubenswrapper[4942]: E0218 19:19:48.109228 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:48.609214856 +0000 UTC m=+148.314147521 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.221081 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:19:48 crc kubenswrapper[4942]: E0218 19:19:48.221544 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:48.721496944 +0000 UTC m=+148.426429609 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.324457 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:48 crc kubenswrapper[4942]: E0218 19:19:48.325189 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:48.825167039 +0000 UTC m=+148.530099704 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.385634 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:19:48 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld
Feb 18 19:19:48 crc kubenswrapper[4942]: [+]process-running ok
Feb 18 19:19:48 crc kubenswrapper[4942]: healthz check failed
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.385692 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.426461 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:19:48 crc kubenswrapper[4942]: E0218 19:19:48.426864 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:48.926845141 +0000 UTC m=+148.631777806 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.448300 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc"
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.480364 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qk5bm"
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.528800 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:48 crc kubenswrapper[4942]: E0218 19:19:48.529571 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.02955667 +0000 UTC m=+148.734489335 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.630236 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:19:48 crc kubenswrapper[4942]: E0218 19:19:48.630500 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.130468111 +0000 UTC m=+148.835400776 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.631029 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:48 crc kubenswrapper[4942]: E0218 19:19:48.631474 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.131458528 +0000 UTC m=+148.836391203 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.733230 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:19:48 crc kubenswrapper[4942]: E0218 19:19:48.734285 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.23426677 +0000 UTC m=+148.939199435 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.776102 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gjnbk"]
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.777541 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjnbk"
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.781250 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.835686 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:48 crc kubenswrapper[4942]: E0218 19:19:48.836155 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.336139447 +0000 UTC m=+149.041072102 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.859563 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gjnbk"]
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.903295 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" event={"ID":"af99a6af-5df3-4b87-8f14-a564c5d86164","Type":"ContainerStarted","Data":"7cda0c908c76ae8935029e4e7acf22d17e9f3e667885843090b60d581423bc78"}
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.903358 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" event={"ID":"af99a6af-5df3-4b87-8f14-a564c5d86164","Type":"ContainerStarted","Data":"6c5356081faf18c6411088b4f8fffb5713c5e43020c169a23e6820e7fb892b5b"}
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.905557 4942 generic.go:334] "Generic (PLEG): container finished" podID="01ba4570-01bb-4964-8c1d-791c25d72a1a" containerID="5fb82fb77a7895a43a30ace42481cf4c1da624e8742b15c1cb5a5cf3044d7c22" exitCode=0
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.905815 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" event={"ID":"01ba4570-01bb-4964-8c1d-791c25d72a1a","Type":"ContainerDied","Data":"5fb82fb77a7895a43a30ace42481cf4c1da624e8742b15c1cb5a5cf3044d7c22"}
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.916080 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-q9pxc"
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.936772 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:19:48 crc kubenswrapper[4942]: E0218 19:19:48.937036 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.436989577 +0000 UTC m=+149.141922282 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.937122 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.937335 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89296\" (UniqueName: \"kubernetes.io/projected/a7f05662-6e61-4d86-8a52-13000d4bd2be-kube-api-access-89296\") pod \"community-operators-gjnbk\" (UID: \"a7f05662-6e61-4d86-8a52-13000d4bd2be\") " pod="openshift-marketplace/community-operators-gjnbk"
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.937510 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f05662-6e61-4d86-8a52-13000d4bd2be-utilities\") pod \"community-operators-gjnbk\" (UID: \"a7f05662-6e61-4d86-8a52-13000d4bd2be\") " pod="openshift-marketplace/community-operators-gjnbk"
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.937605 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f05662-6e61-4d86-8a52-13000d4bd2be-catalog-content\") pod \"community-operators-gjnbk\" (UID: \"a7f05662-6e61-4d86-8a52-13000d4bd2be\") " pod="openshift-marketplace/community-operators-gjnbk"
Feb 18 19:19:48 crc kubenswrapper[4942]: E0218 19:19:48.937781 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.437740337 +0000 UTC m=+149.142673002 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.981325 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tm22r"]
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.982485 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tm22r"
Feb 18 19:19:48 crc kubenswrapper[4942]: I0218 19:19:48.990729 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.039191 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.039479 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f05662-6e61-4d86-8a52-13000d4bd2be-catalog-content\") pod \"community-operators-gjnbk\" (UID: \"a7f05662-6e61-4d86-8a52-13000d4bd2be\") " pod="openshift-marketplace/community-operators-gjnbk"
Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.040271 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89296\" (UniqueName: \"kubernetes.io/projected/a7f05662-6e61-4d86-8a52-13000d4bd2be-kube-api-access-89296\") pod \"community-operators-gjnbk\" (UID: \"a7f05662-6e61-4d86-8a52-13000d4bd2be\") " pod="openshift-marketplace/community-operators-gjnbk"
Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.040558 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f05662-6e61-4d86-8a52-13000d4bd2be-utilities\") pod \"community-operators-gjnbk\" (UID: \"a7f05662-6e61-4d86-8a52-13000d4bd2be\") " pod="openshift-marketplace/community-operators-gjnbk"
Feb 18 19:19:49 crc kubenswrapper[4942]: E0218 19:19:49.041073 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.541051382 +0000 UTC m=+149.245984047 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.042436 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f05662-6e61-4d86-8a52-13000d4bd2be-catalog-content\") pod \"community-operators-gjnbk\" (UID: \"a7f05662-6e61-4d86-8a52-13000d4bd2be\") " pod="openshift-marketplace/community-operators-gjnbk"
Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.044334 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f05662-6e61-4d86-8a52-13000d4bd2be-utilities\") pod \"community-operators-gjnbk\" (UID: \"a7f05662-6e61-4d86-8a52-13000d4bd2be\") " pod="openshift-marketplace/community-operators-gjnbk"
Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.086893 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tm22r"]
Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.096303 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89296\" (UniqueName: \"kubernetes.io/projected/a7f05662-6e61-4d86-8a52-13000d4bd2be-kube-api-access-89296\") pod \"community-operators-gjnbk\" (UID: \"a7f05662-6e61-4d86-8a52-13000d4bd2be\") " pod="openshift-marketplace/community-operators-gjnbk"
Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.141742 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.141845 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b0511d8-736f-48fa-94a5-9a45e8482467-utilities\") pod \"certified-operators-tm22r\" (UID: \"9b0511d8-736f-48fa-94a5-9a45e8482467\") " pod="openshift-marketplace/certified-operators-tm22r"
Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.141879 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b0511d8-736f-48fa-94a5-9a45e8482467-catalog-content\") pod \"certified-operators-tm22r\" (UID: \"9b0511d8-736f-48fa-94a5-9a45e8482467\") " pod="openshift-marketplace/certified-operators-tm22r"
Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.141931 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw4w8\" (UniqueName: \"kubernetes.io/projected/9b0511d8-736f-48fa-94a5-9a45e8482467-kube-api-access-lw4w8\") pod \"certified-operators-tm22r\" (UID: \"9b0511d8-736f-48fa-94a5-9a45e8482467\") " pod="openshift-marketplace/certified-operators-tm22r"
Feb 18 19:19:49 crc kubenswrapper[4942]: E0218 19:19:49.142401 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.642383215 +0000 UTC m=+149.347315880 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.166068 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tk5v7"]
Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.167109 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tk5v7"
Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.192104 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tk5v7"]
Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.242777 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.243135 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b0511d8-736f-48fa-94a5-9a45e8482467-utilities\") pod \"certified-operators-tm22r\" (UID: \"9b0511d8-736f-48fa-94a5-9a45e8482467\") " pod="openshift-marketplace/certified-operators-tm22r"
Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.243183 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName:
\"kubernetes.io/empty-dir/9b0511d8-736f-48fa-94a5-9a45e8482467-catalog-content\") pod \"certified-operators-tm22r\" (UID: \"9b0511d8-736f-48fa-94a5-9a45e8482467\") " pod="openshift-marketplace/certified-operators-tm22r" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.243240 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw4w8\" (UniqueName: \"kubernetes.io/projected/9b0511d8-736f-48fa-94a5-9a45e8482467-kube-api-access-lw4w8\") pod \"certified-operators-tm22r\" (UID: \"9b0511d8-736f-48fa-94a5-9a45e8482467\") " pod="openshift-marketplace/certified-operators-tm22r" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.244214 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b0511d8-736f-48fa-94a5-9a45e8482467-catalog-content\") pod \"certified-operators-tm22r\" (UID: \"9b0511d8-736f-48fa-94a5-9a45e8482467\") " pod="openshift-marketplace/certified-operators-tm22r" Feb 18 19:19:49 crc kubenswrapper[4942]: E0218 19:19:49.244311 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.744294223 +0000 UTC m=+149.449226888 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.244645 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b0511d8-736f-48fa-94a5-9a45e8482467-utilities\") pod \"certified-operators-tm22r\" (UID: \"9b0511d8-736f-48fa-94a5-9a45e8482467\") " pod="openshift-marketplace/certified-operators-tm22r" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.274416 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw4w8\" (UniqueName: \"kubernetes.io/projected/9b0511d8-736f-48fa-94a5-9a45e8482467-kube-api-access-lw4w8\") pod \"certified-operators-tm22r\" (UID: \"9b0511d8-736f-48fa-94a5-9a45e8482467\") " pod="openshift-marketplace/certified-operators-tm22r" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.295395 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tm22r" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.344791 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/934bc032-4641-47ee-9689-39edb4e5a24a-catalog-content\") pod \"community-operators-tk5v7\" (UID: \"934bc032-4641-47ee-9689-39edb4e5a24a\") " pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.344840 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/934bc032-4641-47ee-9689-39edb4e5a24a-utilities\") pod \"community-operators-tk5v7\" (UID: \"934bc032-4641-47ee-9689-39edb4e5a24a\") " pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.344887 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.344925 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98p66\" (UniqueName: \"kubernetes.io/projected/934bc032-4641-47ee-9689-39edb4e5a24a-kube-api-access-98p66\") pod \"community-operators-tk5v7\" (UID: \"934bc032-4641-47ee-9689-39edb4e5a24a\") " pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:19:49 crc kubenswrapper[4942]: E0218 19:19:49.345478 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.84543871 +0000 UTC m=+149.550371455 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.392018 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:19:49 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld Feb 18 19:19:49 crc kubenswrapper[4942]: [+]process-running ok Feb 18 19:19:49 crc kubenswrapper[4942]: healthz check failed Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.392136 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.396478 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjnbk" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.404635 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c28tv"] Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.405650 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c28tv" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.427753 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c28tv"] Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.446689 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.447181 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/934bc032-4641-47ee-9689-39edb4e5a24a-catalog-content\") pod \"community-operators-tk5v7\" (UID: \"934bc032-4641-47ee-9689-39edb4e5a24a\") " pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.447235 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/934bc032-4641-47ee-9689-39edb4e5a24a-utilities\") pod \"community-operators-tk5v7\" (UID: \"934bc032-4641-47ee-9689-39edb4e5a24a\") " pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.447325 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98p66\" (UniqueName: \"kubernetes.io/projected/934bc032-4641-47ee-9689-39edb4e5a24a-kube-api-access-98p66\") pod \"community-operators-tk5v7\" (UID: \"934bc032-4641-47ee-9689-39edb4e5a24a\") " pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:19:49 crc kubenswrapper[4942]: E0218 19:19:49.447946 4942 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:49.947905033 +0000 UTC m=+149.652837718 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.449277 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/934bc032-4641-47ee-9689-39edb4e5a24a-catalog-content\") pod \"community-operators-tk5v7\" (UID: \"934bc032-4641-47ee-9689-39edb4e5a24a\") " pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.449294 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/934bc032-4641-47ee-9689-39edb4e5a24a-utilities\") pod \"community-operators-tk5v7\" (UID: \"934bc032-4641-47ee-9689-39edb4e5a24a\") " pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.507163 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98p66\" (UniqueName: \"kubernetes.io/projected/934bc032-4641-47ee-9689-39edb4e5a24a-kube-api-access-98p66\") pod \"community-operators-tk5v7\" (UID: \"934bc032-4641-47ee-9689-39edb4e5a24a\") " pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.548988 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02b9174b-0251-447d-8266-56e92f6e9be1-catalog-content\") pod \"certified-operators-c28tv\" (UID: \"02b9174b-0251-447d-8266-56e92f6e9be1\") " pod="openshift-marketplace/certified-operators-c28tv" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.549051 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.549083 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjfn2\" (UniqueName: \"kubernetes.io/projected/02b9174b-0251-447d-8266-56e92f6e9be1-kube-api-access-rjfn2\") pod \"certified-operators-c28tv\" (UID: \"02b9174b-0251-447d-8266-56e92f6e9be1\") " pod="openshift-marketplace/certified-operators-c28tv" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.549111 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02b9174b-0251-447d-8266-56e92f6e9be1-utilities\") pod \"certified-operators-c28tv\" (UID: \"02b9174b-0251-447d-8266-56e92f6e9be1\") " pod="openshift-marketplace/certified-operators-c28tv" Feb 18 19:19:49 crc kubenswrapper[4942]: E0218 19:19:49.549468 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:50.049455331 +0000 UTC m=+149.754387996 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.652406 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:49 crc kubenswrapper[4942]: E0218 19:19:49.652679 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:50.152645274 +0000 UTC m=+149.857577939 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.652794 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02b9174b-0251-447d-8266-56e92f6e9be1-catalog-content\") pod \"certified-operators-c28tv\" (UID: \"02b9174b-0251-447d-8266-56e92f6e9be1\") " pod="openshift-marketplace/certified-operators-c28tv" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.652931 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.652975 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjfn2\" (UniqueName: \"kubernetes.io/projected/02b9174b-0251-447d-8266-56e92f6e9be1-kube-api-access-rjfn2\") pod \"certified-operators-c28tv\" (UID: \"02b9174b-0251-447d-8266-56e92f6e9be1\") " pod="openshift-marketplace/certified-operators-c28tv" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.653041 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02b9174b-0251-447d-8266-56e92f6e9be1-utilities\") pod \"certified-operators-c28tv\" (UID: 
\"02b9174b-0251-447d-8266-56e92f6e9be1\") " pod="openshift-marketplace/certified-operators-c28tv" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.653612 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02b9174b-0251-447d-8266-56e92f6e9be1-utilities\") pod \"certified-operators-c28tv\" (UID: \"02b9174b-0251-447d-8266-56e92f6e9be1\") " pod="openshift-marketplace/certified-operators-c28tv" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.653847 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02b9174b-0251-447d-8266-56e92f6e9be1-catalog-content\") pod \"certified-operators-c28tv\" (UID: \"02b9174b-0251-447d-8266-56e92f6e9be1\") " pod="openshift-marketplace/certified-operators-c28tv" Feb 18 19:19:49 crc kubenswrapper[4942]: E0218 19:19:49.654099 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:50.154092513 +0000 UTC m=+149.859025178 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.695739 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjfn2\" (UniqueName: \"kubernetes.io/projected/02b9174b-0251-447d-8266-56e92f6e9be1-kube-api-access-rjfn2\") pod \"certified-operators-c28tv\" (UID: \"02b9174b-0251-447d-8266-56e92f6e9be1\") " pod="openshift-marketplace/certified-operators-c28tv" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.722224 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c28tv" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.754945 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:49 crc kubenswrapper[4942]: E0218 19:19:49.755819 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:50.255798214 +0000 UTC m=+149.960730869 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.795100 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.857052 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:49 crc kubenswrapper[4942]: E0218 19:19:49.857417 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:50.357405374 +0000 UTC m=+150.062338039 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.875161 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tm22r"] Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.892253 4942 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.962052 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:49 crc kubenswrapper[4942]: E0218 19:19:49.962345 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:50.462279192 +0000 UTC m=+150.167211857 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.962466 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.962831 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.962876 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:49 crc kubenswrapper[4942]: E0218 19:19:49.963044 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-18 19:19:50.463025342 +0000 UTC m=+150.167958007 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.969273 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" event={"ID":"af99a6af-5df3-4b87-8f14-a564c5d86164","Type":"ContainerStarted","Data":"6e237fc824969bf20176670a4af0fe4f179c5f11b94b08b852ef7d05237298da"} Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.970127 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:49 crc kubenswrapper[4942]: I0218 19:19:49.973140 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.031447 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-w9lpz" podStartSLOduration=11.031419196 
podStartE2EDuration="11.031419196s" podCreationTimestamp="2026-02-18 19:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:50.022535117 +0000 UTC m=+149.727467782" watchObservedRunningTime="2026-02-18 19:19:50.031419196 +0000 UTC m=+149.736351861" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.051483 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gjnbk"] Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.064347 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:50 crc kubenswrapper[4942]: E0218 19:19:50.065652 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:50.565632809 +0000 UTC m=+150.270565474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.092218 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.160947 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.174160 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.174644 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:50 crc kubenswrapper[4942]: E0218 19:19:50.174680 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 19:19:50.674658329 +0000 UTC m=+150.379590984 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.174743 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.179981 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.191923 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.249672 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c28tv"] Feb 18 19:19:50 crc 
kubenswrapper[4942]: I0218 19:19:50.275617 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:50 crc kubenswrapper[4942]: E0218 19:19:50.276037 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 19:19:50.775994591 +0000 UTC m=+150.480927266 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.377683 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:50 crc kubenswrapper[4942]: E0218 19:19:50.378037 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 19:19:50.878024282 +0000 UTC m=+150.582956947 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2fcrf" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.378307 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.390615 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:19:50 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld Feb 18 19:19:50 crc kubenswrapper[4942]: [+]process-running ok Feb 18 19:19:50 crc kubenswrapper[4942]: healthz check failed Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.390675 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.404846 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tk5v7"] Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.409499 4942 reconciler.go:161] "OperationExecutor.RegisterPlugin started" 
plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-18T19:19:49.892280804Z","Handler":null,"Name":""} Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.411840 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.412565 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.424316 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.428824 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.429875 4942 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.429925 4942 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.479999 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.481151 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/4d362dd3-7195-4c71-9a1c-b4170b339f6d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4d362dd3-7195-4c71-9a1c-b4170b339f6d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.481246 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d362dd3-7195-4c71-9a1c-b4170b339f6d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4d362dd3-7195-4c71-9a1c-b4170b339f6d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.486456 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.505382 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.582052 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d362dd3-7195-4c71-9a1c-b4170b339f6d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4d362dd3-7195-4c71-9a1c-b4170b339f6d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.582114 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.582159 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d362dd3-7195-4c71-9a1c-b4170b339f6d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4d362dd3-7195-4c71-9a1c-b4170b339f6d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.582250 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d362dd3-7195-4c71-9a1c-b4170b339f6d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4d362dd3-7195-4c71-9a1c-b4170b339f6d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.608569 4942 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.608610 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.639640 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d362dd3-7195-4c71-9a1c-b4170b339f6d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4d362dd3-7195-4c71-9a1c-b4170b339f6d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.687386 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.740328 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2fcrf\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") " pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.789271 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.789467 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01ba4570-01bb-4964-8c1d-791c25d72a1a-secret-volume\") pod \"01ba4570-01bb-4964-8c1d-791c25d72a1a\" (UID: \"01ba4570-01bb-4964-8c1d-791c25d72a1a\") " Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.789513 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g7fx\" (UniqueName: \"kubernetes.io/projected/01ba4570-01bb-4964-8c1d-791c25d72a1a-kube-api-access-7g7fx\") pod \"01ba4570-01bb-4964-8c1d-791c25d72a1a\" (UID: \"01ba4570-01bb-4964-8c1d-791c25d72a1a\") " Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.789664 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01ba4570-01bb-4964-8c1d-791c25d72a1a-config-volume\") pod \"01ba4570-01bb-4964-8c1d-791c25d72a1a\" (UID: \"01ba4570-01bb-4964-8c1d-791c25d72a1a\") " Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.790586 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ba4570-01bb-4964-8c1d-791c25d72a1a-config-volume" (OuterVolumeSpecName: "config-volume") pod "01ba4570-01bb-4964-8c1d-791c25d72a1a" (UID: "01ba4570-01bb-4964-8c1d-791c25d72a1a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.796895 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ba4570-01bb-4964-8c1d-791c25d72a1a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "01ba4570-01bb-4964-8c1d-791c25d72a1a" (UID: "01ba4570-01bb-4964-8c1d-791c25d72a1a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.797117 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ba4570-01bb-4964-8c1d-791c25d72a1a-kube-api-access-7g7fx" (OuterVolumeSpecName: "kube-api-access-7g7fx") pod "01ba4570-01bb-4964-8c1d-791c25d72a1a" (UID: "01ba4570-01bb-4964-8c1d-791c25d72a1a"). InnerVolumeSpecName "kube-api-access-7g7fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:19:50 crc kubenswrapper[4942]: W0218 19:19:50.820233 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-a093cb01c937fdb5121367f0adbbd61b3fc43c52df093cc472879fb6fbf77971 WatchSource:0}: Error finding container a093cb01c937fdb5121367f0adbbd61b3fc43c52df093cc472879fb6fbf77971: Status 404 returned error can't find the container with id a093cb01c937fdb5121367f0adbbd61b3fc43c52df093cc472879fb6fbf77971 Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.844233 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.891229 4942 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01ba4570-01bb-4964-8c1d-791c25d72a1a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.891255 4942 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/01ba4570-01bb-4964-8c1d-791c25d72a1a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.891265 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g7fx\" (UniqueName: \"kubernetes.io/projected/01ba4570-01bb-4964-8c1d-791c25d72a1a-kube-api-access-7g7fx\") on node \"crc\" DevicePath \"\"" Feb 18 19:19:50 crc kubenswrapper[4942]: W0218 19:19:50.922467 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-b3bfb2a8535c25a420ab989ee36a2598252571329bfa55dddc61982069c783f6 WatchSource:0}: Error finding container b3bfb2a8535c25a420ab989ee36a2598252571329bfa55dddc61982069c783f6: Status 404 returned error can't find the container with id b3bfb2a8535c25a420ab989ee36a2598252571329bfa55dddc61982069c783f6 Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.950295 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vrlpg"] Feb 18 19:19:50 crc kubenswrapper[4942]: E0218 19:19:50.950520 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01ba4570-01bb-4964-8c1d-791c25d72a1a" containerName="collect-profiles" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.950533 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ba4570-01bb-4964-8c1d-791c25d72a1a" 
containerName="collect-profiles" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.950640 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="01ba4570-01bb-4964-8c1d-791c25d72a1a" containerName="collect-profiles" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.951654 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.953870 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 19:19:50 crc kubenswrapper[4942]: I0218 19:19:50.959010 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrlpg"] Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.017065 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" event={"ID":"01ba4570-01bb-4964-8c1d-791c25d72a1a","Type":"ContainerDied","Data":"5e1dc2e31f1a650ed17f640e417c2728e29699e3f206e468494747757484a591"} Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.017121 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e1dc2e31f1a650ed17f640e417c2728e29699e3f206e468494747757484a591" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.017184 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.024526 4942 generic.go:334] "Generic (PLEG): container finished" podID="9b0511d8-736f-48fa-94a5-9a45e8482467" containerID="75b2c06df75750c4c383a2a5c55da9c635db709e0a2c8fdf77529d081e81914f" exitCode=0 Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.024568 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tm22r" event={"ID":"9b0511d8-736f-48fa-94a5-9a45e8482467","Type":"ContainerDied","Data":"75b2c06df75750c4c383a2a5c55da9c635db709e0a2c8fdf77529d081e81914f"} Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.024583 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tm22r" event={"ID":"9b0511d8-736f-48fa-94a5-9a45e8482467","Type":"ContainerStarted","Data":"903844334b076d9d3fb48a98e733d182c6c0ea5de7f8aeb1362b7e203a4a8fa4"} Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.026124 4942 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.030627 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"656ad242e374557a7c0f48ee3bd1592417527c18ddcd2b2f34caa19c26d34b58"} Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.046582 4942 generic.go:334] "Generic (PLEG): container finished" podID="02b9174b-0251-447d-8266-56e92f6e9be1" containerID="01a84768c2d4f4eb7b1180f3d4ce6ea22f3b2fc585b9417ed7bc6475cacbd4a4" exitCode=0 Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.053581 4942 generic.go:334] "Generic (PLEG): container finished" podID="a7f05662-6e61-4d86-8a52-13000d4bd2be" 
containerID="e891a3b4b4ba4720cda01043300773838c67303d7afcead4f05b8f2e095463e4" exitCode=0 Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.053940 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.054802 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c28tv" event={"ID":"02b9174b-0251-447d-8266-56e92f6e9be1","Type":"ContainerDied","Data":"01a84768c2d4f4eb7b1180f3d4ce6ea22f3b2fc585b9417ed7bc6475cacbd4a4"} Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.054835 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c28tv" event={"ID":"02b9174b-0251-447d-8266-56e92f6e9be1","Type":"ContainerStarted","Data":"8c6af65ee9862a635b1667bd69dde2c1cdffc885f9052d205608bd240b148144"} Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.054862 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjnbk" event={"ID":"a7f05662-6e61-4d86-8a52-13000d4bd2be","Type":"ContainerDied","Data":"e891a3b4b4ba4720cda01043300773838c67303d7afcead4f05b8f2e095463e4"} Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.054877 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjnbk" event={"ID":"a7f05662-6e61-4d86-8a52-13000d4bd2be","Type":"ContainerStarted","Data":"48be7d221e592c508e0024c55b4c7ad66329680b58e7532a74bd5a930a0ac4bd"} Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.065066 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"a093cb01c937fdb5121367f0adbbd61b3fc43c52df093cc472879fb6fbf77971"} Feb 18 19:19:51 crc 
kubenswrapper[4942]: I0218 19:19:51.068897 4942 generic.go:334] "Generic (PLEG): container finished" podID="934bc032-4641-47ee-9689-39edb4e5a24a" containerID="7fe3b6c87d6a3eef04ee129d8d8024bf02b590368b744b11406d1709338db6c7" exitCode=0 Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.068966 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tk5v7" event={"ID":"934bc032-4641-47ee-9689-39edb4e5a24a","Type":"ContainerDied","Data":"7fe3b6c87d6a3eef04ee129d8d8024bf02b590368b744b11406d1709338db6c7"} Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.068996 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tk5v7" event={"ID":"934bc032-4641-47ee-9689-39edb4e5a24a","Type":"ContainerStarted","Data":"5c780f3eaf3a7663544d07b41d9cc753cd4008f1802dbe09d0227e582dd487c7"} Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.073430 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b3bfb2a8535c25a420ab989ee36a2598252571329bfa55dddc61982069c783f6"} Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.094249 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07639322-4f8b-47d5-85c7-da678ca9eaf1-utilities\") pod \"redhat-marketplace-vrlpg\" (UID: \"07639322-4f8b-47d5-85c7-da678ca9eaf1\") " pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.094297 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8vnr\" (UniqueName: \"kubernetes.io/projected/07639322-4f8b-47d5-85c7-da678ca9eaf1-kube-api-access-h8vnr\") pod \"redhat-marketplace-vrlpg\" (UID: \"07639322-4f8b-47d5-85c7-da678ca9eaf1\") " 
pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.094329 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07639322-4f8b-47d5-85c7-da678ca9eaf1-catalog-content\") pod \"redhat-marketplace-vrlpg\" (UID: \"07639322-4f8b-47d5-85c7-da678ca9eaf1\") " pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.198172 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07639322-4f8b-47d5-85c7-da678ca9eaf1-utilities\") pod \"redhat-marketplace-vrlpg\" (UID: \"07639322-4f8b-47d5-85c7-da678ca9eaf1\") " pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.198292 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8vnr\" (UniqueName: \"kubernetes.io/projected/07639322-4f8b-47d5-85c7-da678ca9eaf1-kube-api-access-h8vnr\") pod \"redhat-marketplace-vrlpg\" (UID: \"07639322-4f8b-47d5-85c7-da678ca9eaf1\") " pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.198339 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07639322-4f8b-47d5-85c7-da678ca9eaf1-catalog-content\") pod \"redhat-marketplace-vrlpg\" (UID: \"07639322-4f8b-47d5-85c7-da678ca9eaf1\") " pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.206754 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07639322-4f8b-47d5-85c7-da678ca9eaf1-utilities\") pod \"redhat-marketplace-vrlpg\" (UID: \"07639322-4f8b-47d5-85c7-da678ca9eaf1\") " 
pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.206866 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07639322-4f8b-47d5-85c7-da678ca9eaf1-catalog-content\") pod \"redhat-marketplace-vrlpg\" (UID: \"07639322-4f8b-47d5-85c7-da678ca9eaf1\") " pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.255735 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8vnr\" (UniqueName: \"kubernetes.io/projected/07639322-4f8b-47d5-85c7-da678ca9eaf1-kube-api-access-h8vnr\") pod \"redhat-marketplace-vrlpg\" (UID: \"07639322-4f8b-47d5-85c7-da678ca9eaf1\") " pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.269638 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2fcrf"] Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.282506 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.362296 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w75d5"] Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.363704 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w75d5"
Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.380160 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w75d5"]
Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.384913 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:19:51 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld
Feb 18 19:19:51 crc kubenswrapper[4942]: [+]process-running ok
Feb 18 19:19:51 crc kubenswrapper[4942]: healthz check failed
Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.384975 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.400576 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8dc55ee-28aa-4789-96c1-0809c7abdc99-utilities\") pod \"redhat-marketplace-w75d5\" (UID: \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\") " pod="openshift-marketplace/redhat-marketplace-w75d5"
Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.400835 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8dc55ee-28aa-4789-96c1-0809c7abdc99-catalog-content\") pod \"redhat-marketplace-w75d5\" (UID: \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\") " pod="openshift-marketplace/redhat-marketplace-w75d5"
Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.400935 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db9ht\" (UniqueName: \"kubernetes.io/projected/f8dc55ee-28aa-4789-96c1-0809c7abdc99-kube-api-access-db9ht\") pod \"redhat-marketplace-w75d5\" (UID: \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\") " pod="openshift-marketplace/redhat-marketplace-w75d5"
Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.477142 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.501556 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8dc55ee-28aa-4789-96c1-0809c7abdc99-catalog-content\") pod \"redhat-marketplace-w75d5\" (UID: \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\") " pod="openshift-marketplace/redhat-marketplace-w75d5"
Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.501703 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db9ht\" (UniqueName: \"kubernetes.io/projected/f8dc55ee-28aa-4789-96c1-0809c7abdc99-kube-api-access-db9ht\") pod \"redhat-marketplace-w75d5\" (UID: \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\") " pod="openshift-marketplace/redhat-marketplace-w75d5"
Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.501831 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8dc55ee-28aa-4789-96c1-0809c7abdc99-utilities\") pod \"redhat-marketplace-w75d5\" (UID: \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\") " pod="openshift-marketplace/redhat-marketplace-w75d5"
Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.502531 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8dc55ee-28aa-4789-96c1-0809c7abdc99-utilities\") pod \"redhat-marketplace-w75d5\" (UID: \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\") " pod="openshift-marketplace/redhat-marketplace-w75d5"
Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.503168 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8dc55ee-28aa-4789-96c1-0809c7abdc99-catalog-content\") pod \"redhat-marketplace-w75d5\" (UID: \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\") " pod="openshift-marketplace/redhat-marketplace-w75d5"
Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.531756 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db9ht\" (UniqueName: \"kubernetes.io/projected/f8dc55ee-28aa-4789-96c1-0809c7abdc99-kube-api-access-db9ht\") pod \"redhat-marketplace-w75d5\" (UID: \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\") " pod="openshift-marketplace/redhat-marketplace-w75d5"
Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.610615 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrlpg"]
Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.824358 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w75d5"
Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.954429 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5dn7d"]
Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.958253 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5dn7d"
Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.975358 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 18 19:19:51 crc kubenswrapper[4942]: I0218 19:19:51.983433 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5dn7d"]
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.009232 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjnb8\" (UniqueName: \"kubernetes.io/projected/fc54a822-e044-4d85-a0a8-499a79d09aaf-kube-api-access-bjnb8\") pod \"redhat-operators-5dn7d\" (UID: \"fc54a822-e044-4d85-a0a8-499a79d09aaf\") " pod="openshift-marketplace/redhat-operators-5dn7d"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.009411 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc54a822-e044-4d85-a0a8-499a79d09aaf-utilities\") pod \"redhat-operators-5dn7d\" (UID: \"fc54a822-e044-4d85-a0a8-499a79d09aaf\") " pod="openshift-marketplace/redhat-operators-5dn7d"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.009469 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc54a822-e044-4d85-a0a8-499a79d09aaf-catalog-content\") pod \"redhat-operators-5dn7d\" (UID: \"fc54a822-e044-4d85-a0a8-499a79d09aaf\") " pod="openshift-marketplace/redhat-operators-5dn7d"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.048053 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w75d5"]
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.096970 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4d362dd3-7195-4c71-9a1c-b4170b339f6d","Type":"ContainerStarted","Data":"d611b2fd6d19ba6e77558cb0cceca6c0fc49ca36f03c6714a84c829a14578ad1"}
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.097026 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4d362dd3-7195-4c71-9a1c-b4170b339f6d","Type":"ContainerStarted","Data":"770c328bd70507fcba345ada4ee9d1bbc463303df462a624765af1530b1f96a8"}
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.103383 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b3a6fc6222411fdb47064907c4990ec6c44fd3b214fc9a57b775bd8ce1fc878f"}
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.105375 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9b470e6050affd8d474fdc4cac7d36379ebcd294fa3642bdcd62d0bf86676651"}
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.105825 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.110560 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc54a822-e044-4d85-a0a8-499a79d09aaf-catalog-content\") pod \"redhat-operators-5dn7d\" (UID: \"fc54a822-e044-4d85-a0a8-499a79d09aaf\") " pod="openshift-marketplace/redhat-operators-5dn7d"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.110566 4942 generic.go:334] "Generic (PLEG): container finished" podID="07639322-4f8b-47d5-85c7-da678ca9eaf1" containerID="656a607515eaebac36b55875247d64557f81a70e0dda53e05599d7bfce8c0c9a" exitCode=0
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.110604 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc54a822-e044-4d85-a0a8-499a79d09aaf-utilities\") pod \"redhat-operators-5dn7d\" (UID: \"fc54a822-e044-4d85-a0a8-499a79d09aaf\") " pod="openshift-marketplace/redhat-operators-5dn7d"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.110689 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjnb8\" (UniqueName: \"kubernetes.io/projected/fc54a822-e044-4d85-a0a8-499a79d09aaf-kube-api-access-bjnb8\") pod \"redhat-operators-5dn7d\" (UID: \"fc54a822-e044-4d85-a0a8-499a79d09aaf\") " pod="openshift-marketplace/redhat-operators-5dn7d"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.110705 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrlpg" event={"ID":"07639322-4f8b-47d5-85c7-da678ca9eaf1","Type":"ContainerDied","Data":"656a607515eaebac36b55875247d64557f81a70e0dda53e05599d7bfce8c0c9a"}
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.110835 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrlpg" event={"ID":"07639322-4f8b-47d5-85c7-da678ca9eaf1","Type":"ContainerStarted","Data":"1342033222b8b7017fedfcc1a993530ce3bb6c2c950b03c672270884763e7952"}
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.111640 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc54a822-e044-4d85-a0a8-499a79d09aaf-catalog-content\") pod \"redhat-operators-5dn7d\" (UID: \"fc54a822-e044-4d85-a0a8-499a79d09aaf\") " pod="openshift-marketplace/redhat-operators-5dn7d"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.112063 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc54a822-e044-4d85-a0a8-499a79d09aaf-utilities\") pod \"redhat-operators-5dn7d\" (UID: \"fc54a822-e044-4d85-a0a8-499a79d09aaf\") " pod="openshift-marketplace/redhat-operators-5dn7d"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.118198 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w75d5" event={"ID":"f8dc55ee-28aa-4789-96c1-0809c7abdc99","Type":"ContainerStarted","Data":"bf827b49c615857eb54c5c1b4eb25133056e0a9065497fbb34a9215010ac6e9f"}
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.123543 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c30f7cef23c7c260de5e039e122b4d9c004a73673d2db332b83648495c2b3ced"}
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.132494 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjnb8\" (UniqueName: \"kubernetes.io/projected/fc54a822-e044-4d85-a0a8-499a79d09aaf-kube-api-access-bjnb8\") pod \"redhat-operators-5dn7d\" (UID: \"fc54a822-e044-4d85-a0a8-499a79d09aaf\") " pod="openshift-marketplace/redhat-operators-5dn7d"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.134221 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" event={"ID":"087f0c6b-3e9f-4db4-bbcb-a8075e218219","Type":"ContainerStarted","Data":"91e860bb5e26a16c65c27e2d570478576e7d6d20c751b07a7d8ecff08551af59"}
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.134274 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" event={"ID":"087f0c6b-3e9f-4db4-bbcb-a8075e218219","Type":"ContainerStarted","Data":"c9af7faf6591829dd44fe7e25f59f09e1004d7cfb6e0f93079ef222657176a3e"}
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.134681 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.149243 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.149209141 podStartE2EDuration="2.149209141s" podCreationTimestamp="2026-02-18 19:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:52.108792082 +0000 UTC m=+151.813724757" watchObservedRunningTime="2026-02-18 19:19:52.149209141 +0000 UTC m=+151.854141806"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.171421 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" podStartSLOduration=131.17140105 podStartE2EDuration="2m11.17140105s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:52.167506415 +0000 UTC m=+151.872439080" watchObservedRunningTime="2026-02-18 19:19:52.17140105 +0000 UTC m=+151.876333715"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.211978 4942 patch_prober.go:28] interesting pod/downloads-7954f5f757-tndhs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.211991 4942 patch_prober.go:28] interesting pod/downloads-7954f5f757-tndhs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.212060 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tndhs" podUID="cb8403e3-f9b3-4ddf-8688-1a025a2b9291" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.212098 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tndhs" podUID="cb8403e3-f9b3-4ddf-8688-1a025a2b9291" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.297712 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5dn7d"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.364147 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5jvhl"]
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.365981 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5jvhl"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.378552 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5jvhl"]
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.392211 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:19:52 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld
Feb 18 19:19:52 crc kubenswrapper[4942]: [+]process-running ok
Feb 18 19:19:52 crc kubenswrapper[4942]: healthz check failed
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.392407 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.414933 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e2bd42-1f23-4563-a1d2-7765ab9181f6-catalog-content\") pod \"redhat-operators-5jvhl\" (UID: \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\") " pod="openshift-marketplace/redhat-operators-5jvhl"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.414995 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr6h8\" (UniqueName: \"kubernetes.io/projected/44e2bd42-1f23-4563-a1d2-7765ab9181f6-kube-api-access-tr6h8\") pod \"redhat-operators-5jvhl\" (UID: \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\") " pod="openshift-marketplace/redhat-operators-5jvhl"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.415083 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e2bd42-1f23-4563-a1d2-7765ab9181f6-utilities\") pod \"redhat-operators-5jvhl\" (UID: \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\") " pod="openshift-marketplace/redhat-operators-5jvhl"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.558795 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e2bd42-1f23-4563-a1d2-7765ab9181f6-catalog-content\") pod \"redhat-operators-5jvhl\" (UID: \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\") " pod="openshift-marketplace/redhat-operators-5jvhl"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.558951 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr6h8\" (UniqueName: \"kubernetes.io/projected/44e2bd42-1f23-4563-a1d2-7765ab9181f6-kube-api-access-tr6h8\") pod \"redhat-operators-5jvhl\" (UID: \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\") " pod="openshift-marketplace/redhat-operators-5jvhl"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.559242 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e2bd42-1f23-4563-a1d2-7765ab9181f6-utilities\") pod \"redhat-operators-5jvhl\" (UID: \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\") " pod="openshift-marketplace/redhat-operators-5jvhl"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.559581 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e2bd42-1f23-4563-a1d2-7765ab9181f6-catalog-content\") pod \"redhat-operators-5jvhl\" (UID: \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\") " pod="openshift-marketplace/redhat-operators-5jvhl"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.559835 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e2bd42-1f23-4563-a1d2-7765ab9181f6-utilities\") pod \"redhat-operators-5jvhl\" (UID: \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\") " pod="openshift-marketplace/redhat-operators-5jvhl"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.566339 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-5l26l"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.567948 4942 patch_prober.go:28] interesting pod/console-f9d7485db-5l26l container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.567999 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-5l26l" podUID="5683bb73-dc7f-40ed-86cd-0c08f2d38147" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.568207 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-5l26l"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.580556 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr6h8\" (UniqueName: \"kubernetes.io/projected/44e2bd42-1f23-4563-a1d2-7765ab9181f6-kube-api-access-tr6h8\") pod \"redhat-operators-5jvhl\" (UID: \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\") " pod="openshift-marketplace/redhat-operators-5jvhl"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.680738 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-v5w2k"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.682846 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-v5w2k"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.694157 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5dn7d"]
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.695323 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-v5w2k"
Feb 18 19:19:52 crc kubenswrapper[4942]: I0218 19:19:52.789127 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5jvhl"
Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.161859 4942 generic.go:334] "Generic (PLEG): container finished" podID="fc54a822-e044-4d85-a0a8-499a79d09aaf" containerID="45ae396e5a2bb9c54d7b56f4a32d81eba0135151fa1a2d7722d17d0a8667d980" exitCode=0
Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.162049 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dn7d" event={"ID":"fc54a822-e044-4d85-a0a8-499a79d09aaf","Type":"ContainerDied","Data":"45ae396e5a2bb9c54d7b56f4a32d81eba0135151fa1a2d7722d17d0a8667d980"}
Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.162307 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dn7d" event={"ID":"fc54a822-e044-4d85-a0a8-499a79d09aaf","Type":"ContainerStarted","Data":"56923a9d84e1c384a4de3a0f2cac66f27ae78aee76d844588bcd57af55695ead"}
Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.165724 4942 generic.go:334] "Generic (PLEG): container finished" podID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" containerID="b6793cfda70d58e1f6a7766cda5ab7da29921b9b2216e4d8bab414d83dbeaadd" exitCode=0
Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.165822 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w75d5" event={"ID":"f8dc55ee-28aa-4789-96c1-0809c7abdc99","Type":"ContainerDied","Data":"b6793cfda70d58e1f6a7766cda5ab7da29921b9b2216e4d8bab414d83dbeaadd"}
Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.180240 4942 generic.go:334] "Generic (PLEG): container finished" podID="4d362dd3-7195-4c71-9a1c-b4170b339f6d" containerID="d611b2fd6d19ba6e77558cb0cceca6c0fc49ca36f03c6714a84c829a14578ad1" exitCode=0
Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.193954 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4d362dd3-7195-4c71-9a1c-b4170b339f6d","Type":"ContainerDied","Data":"d611b2fd6d19ba6e77558cb0cceca6c0fc49ca36f03c6714a84c829a14578ad1"}
Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.200927 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-v5w2k"
Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.366344 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5jvhl"]
Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.380264 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-fgw8l"
Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.385369 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:19:53 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld
Feb 18 19:19:53 crc kubenswrapper[4942]: [+]process-running ok
Feb 18 19:19:53 crc kubenswrapper[4942]: healthz check failed
Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.385431 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:19:53 crc kubenswrapper[4942]: W0218 19:19:53.432960 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44e2bd42_1f23_4563_a1d2_7765ab9181f6.slice/crio-076f790451e8965bca0e7fb3c29d623ec83f9c2b76666a5189e58eb3eab1c839 WatchSource:0}: Error finding container 076f790451e8965bca0e7fb3c29d623ec83f9c2b76666a5189e58eb3eab1c839: Status 404 returned error can't find the container with id 076f790451e8965bca0e7fb3c29d623ec83f9c2b76666a5189e58eb3eab1c839
Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.483273 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.484018 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.490332 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.491218 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.502028 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.601808 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6aba94fa-2207-4cae-8a64-536109c9c967-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6aba94fa-2207-4cae-8a64-536109c9c967\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.601882 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6aba94fa-2207-4cae-8a64-536109c9c967-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6aba94fa-2207-4cae-8a64-536109c9c967\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.709452 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6aba94fa-2207-4cae-8a64-536109c9c967-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6aba94fa-2207-4cae-8a64-536109c9c967\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.709885 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6aba94fa-2207-4cae-8a64-536109c9c967-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6aba94fa-2207-4cae-8a64-536109c9c967\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.709898 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6aba94fa-2207-4cae-8a64-536109c9c967-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6aba94fa-2207-4cae-8a64-536109c9c967\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.742198 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.742256 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.744893 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6aba94fa-2207-4cae-8a64-536109c9c967-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6aba94fa-2207-4cae-8a64-536109c9c967\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 19:19:53 crc kubenswrapper[4942]: I0218 19:19:53.805397 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.088278 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.189419 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6aba94fa-2207-4cae-8a64-536109c9c967","Type":"ContainerStarted","Data":"8b70d6fbaa88c2459201d464b33a2dadc0c468a0749f9e8892f0e9c58f4a80f0"}
Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.195810 4942 generic.go:334] "Generic (PLEG): container finished" podID="44e2bd42-1f23-4563-a1d2-7765ab9181f6" containerID="9f0f2fd818af74deb03761abad4e3f260742b088b8ddfc49644611d327e71c74" exitCode=0
Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.196102 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jvhl" event={"ID":"44e2bd42-1f23-4563-a1d2-7765ab9181f6","Type":"ContainerDied","Data":"9f0f2fd818af74deb03761abad4e3f260742b088b8ddfc49644611d327e71c74"}
Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.196139 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jvhl" event={"ID":"44e2bd42-1f23-4563-a1d2-7765ab9181f6","Type":"ContainerStarted","Data":"076f790451e8965bca0e7fb3c29d623ec83f9c2b76666a5189e58eb3eab1c839"}
Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.412885 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:19:54 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld
Feb 18 19:19:54 crc kubenswrapper[4942]: [+]process-running ok
Feb 18 19:19:54 crc kubenswrapper[4942]: healthz check failed
Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.412953 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.448637 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.525587 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d362dd3-7195-4c71-9a1c-b4170b339f6d-kube-api-access\") pod \"4d362dd3-7195-4c71-9a1c-b4170b339f6d\" (UID: \"4d362dd3-7195-4c71-9a1c-b4170b339f6d\") "
Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.526881 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d362dd3-7195-4c71-9a1c-b4170b339f6d-kubelet-dir\") pod \"4d362dd3-7195-4c71-9a1c-b4170b339f6d\" (UID: \"4d362dd3-7195-4c71-9a1c-b4170b339f6d\") "
Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.526981 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d362dd3-7195-4c71-9a1c-b4170b339f6d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4d362dd3-7195-4c71-9a1c-b4170b339f6d" (UID: "4d362dd3-7195-4c71-9a1c-b4170b339f6d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.527240 4942 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d362dd3-7195-4c71-9a1c-b4170b339f6d-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.535260 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d362dd3-7195-4c71-9a1c-b4170b339f6d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4d362dd3-7195-4c71-9a1c-b4170b339f6d" (UID: "4d362dd3-7195-4c71-9a1c-b4170b339f6d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:19:54 crc kubenswrapper[4942]: I0218 19:19:54.628285 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d362dd3-7195-4c71-9a1c-b4170b339f6d-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 18 19:19:55 crc kubenswrapper[4942]: I0218 19:19:55.203873 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4d362dd3-7195-4c71-9a1c-b4170b339f6d","Type":"ContainerDied","Data":"770c328bd70507fcba345ada4ee9d1bbc463303df462a624765af1530b1f96a8"}
Feb 18 19:19:55 crc kubenswrapper[4942]: I0218 19:19:55.203919 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="770c328bd70507fcba345ada4ee9d1bbc463303df462a624765af1530b1f96a8"
Feb 18 19:19:55 crc kubenswrapper[4942]: I0218 19:19:55.204016 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 18 19:19:55 crc kubenswrapper[4942]: I0218 19:19:55.212860 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6aba94fa-2207-4cae-8a64-536109c9c967","Type":"ContainerStarted","Data":"d09193c45f6e62ddc9dbffb92ad54e0e6ea5a2a3f0c6a2dae876860e6f985516"}
Feb 18 19:19:55 crc kubenswrapper[4942]: I0218 19:19:55.250529 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-s4kjv"
Feb 18 19:19:55 crc kubenswrapper[4942]: I0218 19:19:55.273667 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.2732953 podStartE2EDuration="2.2732953s" podCreationTimestamp="2026-02-18 19:19:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:19:55.222669684 +0000 UTC m=+154.927602349" watchObservedRunningTime="2026-02-18 19:19:55.2732953 +0000 UTC m=+154.978227965"
Feb 18 19:19:55 crc kubenswrapper[4942]: I0218 19:19:55.384929 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:19:55 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld
Feb 18 19:19:55 crc kubenswrapper[4942]: [+]process-running ok
Feb 18 19:19:55 crc kubenswrapper[4942]: healthz check failed
Feb 18 19:19:55 crc kubenswrapper[4942]: I0218 19:19:55.385333 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 19:19:56 crc kubenswrapper[4942]: I0218 19:19:56.226017 4942 generic.go:334] "Generic (PLEG): container finished" podID="6aba94fa-2207-4cae-8a64-536109c9c967" containerID="d09193c45f6e62ddc9dbffb92ad54e0e6ea5a2a3f0c6a2dae876860e6f985516" exitCode=0
Feb 18 19:19:56 crc kubenswrapper[4942]: I0218 19:19:56.226077 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6aba94fa-2207-4cae-8a64-536109c9c967","Type":"ContainerDied","Data":"d09193c45f6e62ddc9dbffb92ad54e0e6ea5a2a3f0c6a2dae876860e6f985516"}
Feb 18 19:19:56 crc kubenswrapper[4942]: I0218 19:19:56.383392 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 19:19:56 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld
Feb 18 19:19:56 crc kubenswrapper[4942]: 
[+]process-running ok Feb 18 19:19:56 crc kubenswrapper[4942]: healthz check failed Feb 18 19:19:56 crc kubenswrapper[4942]: I0218 19:19:56.383464 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:19:57 crc kubenswrapper[4942]: I0218 19:19:57.382024 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:19:57 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld Feb 18 19:19:57 crc kubenswrapper[4942]: [+]process-running ok Feb 18 19:19:57 crc kubenswrapper[4942]: healthz check failed Feb 18 19:19:57 crc kubenswrapper[4942]: I0218 19:19:57.382133 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:19:58 crc kubenswrapper[4942]: I0218 19:19:58.382972 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:19:58 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld Feb 18 19:19:58 crc kubenswrapper[4942]: [+]process-running ok Feb 18 19:19:58 crc kubenswrapper[4942]: healthz check failed Feb 18 19:19:58 crc kubenswrapper[4942]: I0218 19:19:58.383308 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:19:59 crc kubenswrapper[4942]: I0218 19:19:59.383295 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:19:59 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld Feb 18 19:19:59 crc kubenswrapper[4942]: [+]process-running ok Feb 18 19:19:59 crc kubenswrapper[4942]: healthz check failed Feb 18 19:19:59 crc kubenswrapper[4942]: I0218 19:19:59.383362 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:20:00 crc kubenswrapper[4942]: I0218 19:20:00.382534 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:20:00 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld Feb 18 19:20:00 crc kubenswrapper[4942]: [+]process-running ok Feb 18 19:20:00 crc kubenswrapper[4942]: healthz check failed Feb 18 19:20:00 crc kubenswrapper[4942]: I0218 19:20:00.382598 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:20:01 crc kubenswrapper[4942]: I0218 19:20:01.383355 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 18 19:20:01 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld Feb 18 19:20:01 crc kubenswrapper[4942]: [+]process-running ok Feb 18 19:20:01 crc kubenswrapper[4942]: healthz check failed Feb 18 19:20:01 crc kubenswrapper[4942]: I0218 19:20:01.383420 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:20:02 crc kubenswrapper[4942]: I0218 19:20:02.216429 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-tndhs" Feb 18 19:20:02 crc kubenswrapper[4942]: I0218 19:20:02.382564 4942 patch_prober.go:28] interesting pod/router-default-5444994796-fgw8l container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 19:20:02 crc kubenswrapper[4942]: [-]has-synced failed: reason withheld Feb 18 19:20:02 crc kubenswrapper[4942]: [+]process-running ok Feb 18 19:20:02 crc kubenswrapper[4942]: healthz check failed Feb 18 19:20:02 crc kubenswrapper[4942]: I0218 19:20:02.382620 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-fgw8l" podUID="8134898c-a265-4fa0-8548-075ea0812b7b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 19:20:02 crc kubenswrapper[4942]: I0218 19:20:02.566339 4942 patch_prober.go:28] interesting pod/console-f9d7485db-5l26l container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 18 19:20:02 crc kubenswrapper[4942]: I0218 19:20:02.566406 4942 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-5l26l" podUID="5683bb73-dc7f-40ed-86cd-0c08f2d38147" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 18 19:20:02 crc kubenswrapper[4942]: I0218 19:20:02.777505 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:20:02 crc kubenswrapper[4942]: I0218 19:20:02.841431 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6aba94fa-2207-4cae-8a64-536109c9c967-kubelet-dir\") pod \"6aba94fa-2207-4cae-8a64-536109c9c967\" (UID: \"6aba94fa-2207-4cae-8a64-536109c9c967\") " Feb 18 19:20:02 crc kubenswrapper[4942]: I0218 19:20:02.841555 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6aba94fa-2207-4cae-8a64-536109c9c967-kube-api-access\") pod \"6aba94fa-2207-4cae-8a64-536109c9c967\" (UID: \"6aba94fa-2207-4cae-8a64-536109c9c967\") " Feb 18 19:20:02 crc kubenswrapper[4942]: I0218 19:20:02.841587 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6aba94fa-2207-4cae-8a64-536109c9c967-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6aba94fa-2207-4cae-8a64-536109c9c967" (UID: "6aba94fa-2207-4cae-8a64-536109c9c967"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:20:02 crc kubenswrapper[4942]: I0218 19:20:02.845003 4942 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6aba94fa-2207-4cae-8a64-536109c9c967-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:02 crc kubenswrapper[4942]: I0218 19:20:02.854872 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aba94fa-2207-4cae-8a64-536109c9c967-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6aba94fa-2207-4cae-8a64-536109c9c967" (UID: "6aba94fa-2207-4cae-8a64-536109c9c967"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:20:02 crc kubenswrapper[4942]: I0218 19:20:02.946754 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6aba94fa-2207-4cae-8a64-536109c9c967-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:03 crc kubenswrapper[4942]: I0218 19:20:03.306177 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6aba94fa-2207-4cae-8a64-536109c9c967","Type":"ContainerDied","Data":"8b70d6fbaa88c2459201d464b33a2dadc0c468a0749f9e8892f0e9c58f4a80f0"} Feb 18 19:20:03 crc kubenswrapper[4942]: I0218 19:20:03.306248 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b70d6fbaa88c2459201d464b33a2dadc0c468a0749f9e8892f0e9c58f4a80f0" Feb 18 19:20:03 crc kubenswrapper[4942]: I0218 19:20:03.306342 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 19:20:03 crc kubenswrapper[4942]: I0218 19:20:03.388355 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:20:03 crc kubenswrapper[4942]: I0218 19:20:03.391526 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-fgw8l" Feb 18 19:20:04 crc kubenswrapper[4942]: I0218 19:20:04.280836 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:20:04 crc kubenswrapper[4942]: I0218 19:20:04.472236 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ac5b5f40-34db-4aeb-abb4-57204673bd53-metrics-certs\") pod \"network-metrics-daemon-qwg6q\" (UID: \"ac5b5f40-34db-4aeb-abb4-57204673bd53\") " pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:20:04 crc kubenswrapper[4942]: I0218 19:20:04.761159 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-qwg6q" Feb 18 19:20:04 crc kubenswrapper[4942]: I0218 19:20:04.996860 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-qwg6q"] Feb 18 19:20:05 crc kubenswrapper[4942]: W0218 19:20:05.006034 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac5b5f40_34db_4aeb_abb4_57204673bd53.slice/crio-17a3cc3533e30c7fbfd9660f8bcd8c7e569611b1699b0b92c095537673688fef WatchSource:0}: Error finding container 17a3cc3533e30c7fbfd9660f8bcd8c7e569611b1699b0b92c095537673688fef: Status 404 returned error can't find the container with id 17a3cc3533e30c7fbfd9660f8bcd8c7e569611b1699b0b92c095537673688fef Feb 18 19:20:05 crc kubenswrapper[4942]: I0218 19:20:05.320913 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" event={"ID":"ac5b5f40-34db-4aeb-abb4-57204673bd53","Type":"ContainerStarted","Data":"17a3cc3533e30c7fbfd9660f8bcd8c7e569611b1699b0b92c095537673688fef"} Feb 18 19:20:07 crc kubenswrapper[4942]: I0218 19:20:07.335034 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" event={"ID":"ac5b5f40-34db-4aeb-abb4-57204673bd53","Type":"ContainerStarted","Data":"ec8b6a4ddaadb7281693f90b113cfc80e98418e1a90f41db6206c8f1d36cc3f6"} Feb 18 19:20:08 crc kubenswrapper[4942]: I0218 19:20:08.347668 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-qwg6q" event={"ID":"ac5b5f40-34db-4aeb-abb4-57204673bd53","Type":"ContainerStarted","Data":"9e9f58eac6f7fe85b907639b1da53c3daf775ef93c64e93576c0047e49dcd4b1"} Feb 18 19:20:09 crc kubenswrapper[4942]: I0218 19:20:09.377480 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-qwg6q" podStartSLOduration=148.377453696 
podStartE2EDuration="2m28.377453696s" podCreationTimestamp="2026-02-18 19:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:09.372569425 +0000 UTC m=+169.077502090" watchObservedRunningTime="2026-02-18 19:20:09.377453696 +0000 UTC m=+169.082386361" Feb 18 19:20:10 crc kubenswrapper[4942]: I0218 19:20:10.797903 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" Feb 18 19:20:12 crc kubenswrapper[4942]: I0218 19:20:12.579391 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:20:12 crc kubenswrapper[4942]: I0218 19:20:12.589850 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:20:20 crc kubenswrapper[4942]: E0218 19:20:20.663116 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 19:20:20 crc kubenswrapper[4942]: E0218 19:20:20.664259 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8vnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vrlpg_openshift-marketplace(07639322-4f8b-47d5-85c7-da678ca9eaf1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:20:20 crc kubenswrapper[4942]: E0218 19:20:20.665540 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-vrlpg" podUID="07639322-4f8b-47d5-85c7-da678ca9eaf1" Feb 18 19:20:20 crc 
kubenswrapper[4942]: E0218 19:20:20.697492 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 19:20:20 crc kubenswrapper[4942]: E0218 19:20:20.697704 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-db9ht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-w75d5_openshift-marketplace(f8dc55ee-28aa-4789-96c1-0809c7abdc99): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:20:20 crc kubenswrapper[4942]: E0218 19:20:20.699125 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-w75d5" podUID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" Feb 18 19:20:22 crc kubenswrapper[4942]: E0218 19:20:22.193459 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vrlpg" podUID="07639322-4f8b-47d5-85c7-da678ca9eaf1" Feb 18 19:20:22 crc kubenswrapper[4942]: E0218 19:20:22.193486 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-w75d5" podUID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" Feb 18 19:20:22 crc kubenswrapper[4942]: E0218 19:20:22.288009 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 19:20:22 crc kubenswrapper[4942]: E0218 19:20:22.288448 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98p66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-tk5v7_openshift-marketplace(934bc032-4641-47ee-9689-39edb4e5a24a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:20:22 crc kubenswrapper[4942]: E0218 19:20:22.289934 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-tk5v7" podUID="934bc032-4641-47ee-9689-39edb4e5a24a" Feb 18 19:20:22 crc kubenswrapper[4942]: E0218 19:20:22.335973 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 19:20:22 crc kubenswrapper[4942]: E0218 19:20:22.336146 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89296,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFro
mSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-gjnbk_openshift-marketplace(a7f05662-6e61-4d86-8a52-13000d4bd2be): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 18 19:20:22 crc kubenswrapper[4942]: E0218 19:20:22.337371 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-gjnbk" podUID="a7f05662-6e61-4d86-8a52-13000d4bd2be" Feb 18 19:20:22 crc kubenswrapper[4942]: E0218 19:20:22.436500 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-tk5v7" podUID="934bc032-4641-47ee-9689-39edb4e5a24a" Feb 18 19:20:22 crc kubenswrapper[4942]: E0218 19:20:22.436682 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-gjnbk" podUID="a7f05662-6e61-4d86-8a52-13000d4bd2be" Feb 18 19:20:23 crc kubenswrapper[4942]: I0218 19:20:23.129297 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8nxhq" Feb 18 19:20:23 crc kubenswrapper[4942]: I0218 19:20:23.437451 4942 generic.go:334] "Generic (PLEG): container finished" podID="02b9174b-0251-447d-8266-56e92f6e9be1" 
containerID="31a34a3009919984a90fc292e1925ef6a14bfec470e5201664bf267723f6d086" exitCode=0 Feb 18 19:20:23 crc kubenswrapper[4942]: I0218 19:20:23.437524 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c28tv" event={"ID":"02b9174b-0251-447d-8266-56e92f6e9be1","Type":"ContainerDied","Data":"31a34a3009919984a90fc292e1925ef6a14bfec470e5201664bf267723f6d086"} Feb 18 19:20:23 crc kubenswrapper[4942]: I0218 19:20:23.440866 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dn7d" event={"ID":"fc54a822-e044-4d85-a0a8-499a79d09aaf","Type":"ContainerStarted","Data":"f47c297aaa4179fbd75fe8d9514cdd383b6ab6c7b7fa7596996fa94fd2798c4b"} Feb 18 19:20:23 crc kubenswrapper[4942]: I0218 19:20:23.442569 4942 generic.go:334] "Generic (PLEG): container finished" podID="44e2bd42-1f23-4563-a1d2-7765ab9181f6" containerID="798e050f4f2aa0f54695a9005889ba42dc8611c4b4073be200e3426ca54b5a65" exitCode=0 Feb 18 19:20:23 crc kubenswrapper[4942]: I0218 19:20:23.442632 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jvhl" event={"ID":"44e2bd42-1f23-4563-a1d2-7765ab9181f6","Type":"ContainerDied","Data":"798e050f4f2aa0f54695a9005889ba42dc8611c4b4073be200e3426ca54b5a65"} Feb 18 19:20:23 crc kubenswrapper[4942]: I0218 19:20:23.445499 4942 generic.go:334] "Generic (PLEG): container finished" podID="9b0511d8-736f-48fa-94a5-9a45e8482467" containerID="a4840f4a2d896cd262391705ac29acf6d59d0478aeff45cc3eafd7da73237848" exitCode=0 Feb 18 19:20:23 crc kubenswrapper[4942]: I0218 19:20:23.445519 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tm22r" event={"ID":"9b0511d8-736f-48fa-94a5-9a45e8482467","Type":"ContainerDied","Data":"a4840f4a2d896cd262391705ac29acf6d59d0478aeff45cc3eafd7da73237848"} Feb 18 19:20:23 crc kubenswrapper[4942]: I0218 19:20:23.740737 4942 patch_prober.go:28] interesting 
pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 19:20:23 crc kubenswrapper[4942]: I0218 19:20:23.741148 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 19:20:23 crc kubenswrapper[4942]: I0218 19:20:23.783801 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kpfjc"]
Feb 18 19:20:24 crc kubenswrapper[4942]: I0218 19:20:24.456247 4942 generic.go:334] "Generic (PLEG): container finished" podID="fc54a822-e044-4d85-a0a8-499a79d09aaf" containerID="f47c297aaa4179fbd75fe8d9514cdd383b6ab6c7b7fa7596996fa94fd2798c4b" exitCode=0
Feb 18 19:20:24 crc kubenswrapper[4942]: I0218 19:20:24.456299 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dn7d" event={"ID":"fc54a822-e044-4d85-a0a8-499a79d09aaf","Type":"ContainerDied","Data":"f47c297aaa4179fbd75fe8d9514cdd383b6ab6c7b7fa7596996fa94fd2798c4b"}
Feb 18 19:20:25 crc kubenswrapper[4942]: I0218 19:20:25.463476 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c28tv" event={"ID":"02b9174b-0251-447d-8266-56e92f6e9be1","Type":"ContainerStarted","Data":"b50a1ea31397a1041190322b11879c6dddcf223bd4897058c0f33a269c3df980"}
Feb 18 19:20:25 crc kubenswrapper[4942]: I0218 19:20:25.467997 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dn7d" event={"ID":"fc54a822-e044-4d85-a0a8-499a79d09aaf","Type":"ContainerStarted","Data":"5b076eb0931e413c70c108596f5ee9f710dd64a76e5895d3b7dca278f88f019c"}
Feb 18 19:20:25 crc kubenswrapper[4942]: I0218 19:20:25.470016 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jvhl" event={"ID":"44e2bd42-1f23-4563-a1d2-7765ab9181f6","Type":"ContainerStarted","Data":"8c3cfa9632209ead80d64ceea7d0876bbdcfdfc0eaa6cacd6e715b124bc2afb4"}
Feb 18 19:20:25 crc kubenswrapper[4942]: I0218 19:20:25.472333 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tm22r" event={"ID":"9b0511d8-736f-48fa-94a5-9a45e8482467","Type":"ContainerStarted","Data":"83405b8b823dd9443c9b689919187f4e07d4402df4f5ed4f940ea091c7001e2b"}
Feb 18 19:20:25 crc kubenswrapper[4942]: I0218 19:20:25.484036 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c28tv" podStartSLOduration=2.4129747200000002 podStartE2EDuration="36.484013898s" podCreationTimestamp="2026-02-18 19:19:49 +0000 UTC" firstStartedPulling="2026-02-18 19:19:51.052080028 +0000 UTC m=+150.757012693" lastFinishedPulling="2026-02-18 19:20:25.123119196 +0000 UTC m=+184.828051871" observedRunningTime="2026-02-18 19:20:25.483182965 +0000 UTC m=+185.188115630" watchObservedRunningTime="2026-02-18 19:20:25.484013898 +0000 UTC m=+185.188946563"
Feb 18 19:20:25 crc kubenswrapper[4942]: I0218 19:20:25.500039 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5jvhl" podStartSLOduration=2.458009175 podStartE2EDuration="33.500020979s" podCreationTimestamp="2026-02-18 19:19:52 +0000 UTC" firstStartedPulling="2026-02-18 19:19:54.198024865 +0000 UTC m=+153.902957530" lastFinishedPulling="2026-02-18 19:20:25.240036669 +0000 UTC m=+184.944969334" observedRunningTime="2026-02-18 19:20:25.498398715 +0000 UTC m=+185.203331380" watchObservedRunningTime="2026-02-18 19:20:25.500020979 +0000 UTC m=+185.204953644"
Feb 18 19:20:25 crc kubenswrapper[4942]: I0218 19:20:25.517576 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tm22r" podStartSLOduration=3.618653725 podStartE2EDuration="37.517544512s" podCreationTimestamp="2026-02-18 19:19:48 +0000 UTC" firstStartedPulling="2026-02-18 19:19:51.025906872 +0000 UTC m=+150.730839537" lastFinishedPulling="2026-02-18 19:20:24.924797659 +0000 UTC m=+184.629730324" observedRunningTime="2026-02-18 19:20:25.516617897 +0000 UTC m=+185.221550562" watchObservedRunningTime="2026-02-18 19:20:25.517544512 +0000 UTC m=+185.222477187"
Feb 18 19:20:25 crc kubenswrapper[4942]: I0218 19:20:25.533752 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5dn7d" podStartSLOduration=2.6751583070000002 podStartE2EDuration="34.533732758s" podCreationTimestamp="2026-02-18 19:19:51 +0000 UTC" firstStartedPulling="2026-02-18 19:19:53.165031542 +0000 UTC m=+152.869964207" lastFinishedPulling="2026-02-18 19:20:25.023605993 +0000 UTC m=+184.728538658" observedRunningTime="2026-02-18 19:20:25.529315419 +0000 UTC m=+185.234248084" watchObservedRunningTime="2026-02-18 19:20:25.533732758 +0000 UTC m=+185.238665413"
Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.295584 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tm22r"
Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.297812 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tm22r"
Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.483866 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 18 19:20:29 crc kubenswrapper[4942]: E0218 19:20:29.484124 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d362dd3-7195-4c71-9a1c-b4170b339f6d" containerName="pruner"
Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.484136 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d362dd3-7195-4c71-9a1c-b4170b339f6d" containerName="pruner"
Feb 18 19:20:29 crc kubenswrapper[4942]: E0218 19:20:29.484144 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aba94fa-2207-4cae-8a64-536109c9c967" containerName="pruner"
Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.484149 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aba94fa-2207-4cae-8a64-536109c9c967" containerName="pruner"
Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.484250 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aba94fa-2207-4cae-8a64-536109c9c967" containerName="pruner"
Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.484260 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d362dd3-7195-4c71-9a1c-b4170b339f6d" containerName="pruner"
Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.484691 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.487993 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.489024 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.493689 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.552051 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tm22r"
Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.582777 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bc416be-42a9-48cf-842d-51c1dcf886ad-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4bc416be-42a9-48cf-842d-51c1dcf886ad\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.582844 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bc416be-42a9-48cf-842d-51c1dcf886ad-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4bc416be-42a9-48cf-842d-51c1dcf886ad\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.684201 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bc416be-42a9-48cf-842d-51c1dcf886ad-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4bc416be-42a9-48cf-842d-51c1dcf886ad\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.684369 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bc416be-42a9-48cf-842d-51c1dcf886ad-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4bc416be-42a9-48cf-842d-51c1dcf886ad\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.684473 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bc416be-42a9-48cf-842d-51c1dcf886ad-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4bc416be-42a9-48cf-842d-51c1dcf886ad\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.703883 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bc416be-42a9-48cf-842d-51c1dcf886ad-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4bc416be-42a9-48cf-842d-51c1dcf886ad\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.724161 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c28tv"
Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.724242 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c28tv"
Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.776474 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c28tv"
Feb 18 19:20:29 crc kubenswrapper[4942]: I0218 19:20:29.817412 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 18 19:20:30 crc kubenswrapper[4942]: I0218 19:20:30.100906 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 19:20:30 crc kubenswrapper[4942]: I0218 19:20:30.278949 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 18 19:20:30 crc kubenswrapper[4942]: I0218 19:20:30.500179 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4bc416be-42a9-48cf-842d-51c1dcf886ad","Type":"ContainerStarted","Data":"f23f76c49f2f8dfecc25ad98e9df225a57a759b2c8f14b46c81c3a2c628841ef"}
Feb 18 19:20:30 crc kubenswrapper[4942]: I0218 19:20:30.555625 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c28tv"
Feb 18 19:20:31 crc kubenswrapper[4942]: I0218 19:20:31.516682 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4bc416be-42a9-48cf-842d-51c1dcf886ad","Type":"ContainerStarted","Data":"73473b2188b5bacea6130af5e2e141bdd79f2eb606c80044881381f2bc22846e"}
Feb 18 19:20:31 crc kubenswrapper[4942]: I0218 19:20:31.566065 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c28tv"]
Feb 18 19:20:32 crc kubenswrapper[4942]: I0218 19:20:32.298747 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5dn7d"
Feb 18 19:20:32 crc kubenswrapper[4942]: I0218 19:20:32.298838 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5dn7d"
Feb 18 19:20:32 crc kubenswrapper[4942]: I0218 19:20:32.355681 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5dn7d"
Feb 18 19:20:32 crc kubenswrapper[4942]: I0218 19:20:32.558098 4942 generic.go:334] "Generic (PLEG): container finished" podID="4bc416be-42a9-48cf-842d-51c1dcf886ad" containerID="73473b2188b5bacea6130af5e2e141bdd79f2eb606c80044881381f2bc22846e" exitCode=0
Feb 18 19:20:32 crc kubenswrapper[4942]: I0218 19:20:32.558272 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4bc416be-42a9-48cf-842d-51c1dcf886ad","Type":"ContainerDied","Data":"73473b2188b5bacea6130af5e2e141bdd79f2eb606c80044881381f2bc22846e"}
Feb 18 19:20:32 crc kubenswrapper[4942]: I0218 19:20:32.558574 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c28tv" podUID="02b9174b-0251-447d-8266-56e92f6e9be1" containerName="registry-server" containerID="cri-o://b50a1ea31397a1041190322b11879c6dddcf223bd4897058c0f33a269c3df980" gracePeriod=2
Feb 18 19:20:32 crc kubenswrapper[4942]: I0218 19:20:32.603154 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5dn7d"
Feb 18 19:20:32 crc kubenswrapper[4942]: I0218 19:20:32.790385 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5jvhl"
Feb 18 19:20:32 crc kubenswrapper[4942]: I0218 19:20:32.791597 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5jvhl"
Feb 18 19:20:32 crc kubenswrapper[4942]: I0218 19:20:32.849343 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5jvhl"
Feb 18 19:20:33 crc kubenswrapper[4942]: I0218 19:20:33.571487 4942 generic.go:334] "Generic (PLEG): container finished" podID="02b9174b-0251-447d-8266-56e92f6e9be1" containerID="b50a1ea31397a1041190322b11879c6dddcf223bd4897058c0f33a269c3df980" exitCode=0
Feb 18 19:20:33 crc kubenswrapper[4942]: I0218 19:20:33.572210 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c28tv" event={"ID":"02b9174b-0251-447d-8266-56e92f6e9be1","Type":"ContainerDied","Data":"b50a1ea31397a1041190322b11879c6dddcf223bd4897058c0f33a269c3df980"}
Feb 18 19:20:33 crc kubenswrapper[4942]: I0218 19:20:33.666366 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5jvhl"
Feb 18 19:20:33 crc kubenswrapper[4942]: I0218 19:20:33.803802 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c28tv"
Feb 18 19:20:33 crc kubenswrapper[4942]: I0218 19:20:33.883324 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 18 19:20:33 crc kubenswrapper[4942]: I0218 19:20:33.969314 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjfn2\" (UniqueName: \"kubernetes.io/projected/02b9174b-0251-447d-8266-56e92f6e9be1-kube-api-access-rjfn2\") pod \"02b9174b-0251-447d-8266-56e92f6e9be1\" (UID: \"02b9174b-0251-447d-8266-56e92f6e9be1\") "
Feb 18 19:20:33 crc kubenswrapper[4942]: I0218 19:20:33.969434 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02b9174b-0251-447d-8266-56e92f6e9be1-catalog-content\") pod \"02b9174b-0251-447d-8266-56e92f6e9be1\" (UID: \"02b9174b-0251-447d-8266-56e92f6e9be1\") "
Feb 18 19:20:33 crc kubenswrapper[4942]: I0218 19:20:33.969525 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02b9174b-0251-447d-8266-56e92f6e9be1-utilities\") pod \"02b9174b-0251-447d-8266-56e92f6e9be1\" (UID: \"02b9174b-0251-447d-8266-56e92f6e9be1\") "
Feb 18 19:20:33 crc kubenswrapper[4942]: I0218 19:20:33.970569 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02b9174b-0251-447d-8266-56e92f6e9be1-utilities" (OuterVolumeSpecName: "utilities") pod "02b9174b-0251-447d-8266-56e92f6e9be1" (UID: "02b9174b-0251-447d-8266-56e92f6e9be1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:20:33 crc kubenswrapper[4942]: I0218 19:20:33.979099 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02b9174b-0251-447d-8266-56e92f6e9be1-kube-api-access-rjfn2" (OuterVolumeSpecName: "kube-api-access-rjfn2") pod "02b9174b-0251-447d-8266-56e92f6e9be1" (UID: "02b9174b-0251-447d-8266-56e92f6e9be1"). InnerVolumeSpecName "kube-api-access-rjfn2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.028755 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02b9174b-0251-447d-8266-56e92f6e9be1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02b9174b-0251-447d-8266-56e92f6e9be1" (UID: "02b9174b-0251-447d-8266-56e92f6e9be1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.070690 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bc416be-42a9-48cf-842d-51c1dcf886ad-kubelet-dir\") pod \"4bc416be-42a9-48cf-842d-51c1dcf886ad\" (UID: \"4bc416be-42a9-48cf-842d-51c1dcf886ad\") "
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.070831 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bc416be-42a9-48cf-842d-51c1dcf886ad-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4bc416be-42a9-48cf-842d-51c1dcf886ad" (UID: "4bc416be-42a9-48cf-842d-51c1dcf886ad"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.070865 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bc416be-42a9-48cf-842d-51c1dcf886ad-kube-api-access\") pod \"4bc416be-42a9-48cf-842d-51c1dcf886ad\" (UID: \"4bc416be-42a9-48cf-842d-51c1dcf886ad\") "
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.071629 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjfn2\" (UniqueName: \"kubernetes.io/projected/02b9174b-0251-447d-8266-56e92f6e9be1-kube-api-access-rjfn2\") on node \"crc\" DevicePath \"\""
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.071661 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02b9174b-0251-447d-8266-56e92f6e9be1-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.071676 4942 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bc416be-42a9-48cf-842d-51c1dcf886ad-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.071741 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02b9174b-0251-447d-8266-56e92f6e9be1-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.074913 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bc416be-42a9-48cf-842d-51c1dcf886ad-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4bc416be-42a9-48cf-842d-51c1dcf886ad" (UID: "4bc416be-42a9-48cf-842d-51c1dcf886ad"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.173352 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4bc416be-42a9-48cf-842d-51c1dcf886ad-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.578620 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c28tv" event={"ID":"02b9174b-0251-447d-8266-56e92f6e9be1","Type":"ContainerDied","Data":"8c6af65ee9862a635b1667bd69dde2c1cdffc885f9052d205608bd240b148144"}
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.578689 4942 scope.go:117] "RemoveContainer" containerID="b50a1ea31397a1041190322b11879c6dddcf223bd4897058c0f33a269c3df980"
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.578643 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c28tv"
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.581224 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4bc416be-42a9-48cf-842d-51c1dcf886ad","Type":"ContainerDied","Data":"f23f76c49f2f8dfecc25ad98e9df225a57a759b2c8f14b46c81c3a2c628841ef"}
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.581315 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f23f76c49f2f8dfecc25ad98e9df225a57a759b2c8f14b46c81c3a2c628841ef"
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.581491 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.602957 4942 scope.go:117] "RemoveContainer" containerID="31a34a3009919984a90fc292e1925ef6a14bfec470e5201664bf267723f6d086"
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.613983 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c28tv"]
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.618604 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c28tv"]
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.641794 4942 scope.go:117] "RemoveContainer" containerID="01a84768c2d4f4eb7b1180f3d4ce6ea22f3b2fc585b9417ed7bc6475cacbd4a4"
Feb 18 19:20:34 crc kubenswrapper[4942]: I0218 19:20:34.761675 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5jvhl"]
Feb 18 19:20:35 crc kubenswrapper[4942]: I0218 19:20:35.044042 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02b9174b-0251-447d-8266-56e92f6e9be1" path="/var/lib/kubelet/pods/02b9174b-0251-447d-8266-56e92f6e9be1/volumes"
Feb 18 19:20:36 crc kubenswrapper[4942]: I0218 19:20:36.601826 4942 generic.go:334] "Generic (PLEG): container finished" podID="934bc032-4641-47ee-9689-39edb4e5a24a" containerID="3c851917d5f3c7ded5c54a7fc6671076bec8b33064360e4e202185900c76a141" exitCode=0
Feb 18 19:20:36 crc kubenswrapper[4942]: I0218 19:20:36.601946 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tk5v7" event={"ID":"934bc032-4641-47ee-9689-39edb4e5a24a","Type":"ContainerDied","Data":"3c851917d5f3c7ded5c54a7fc6671076bec8b33064360e4e202185900c76a141"}
Feb 18 19:20:36 crc kubenswrapper[4942]: I0218 19:20:36.602512 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5jvhl" podUID="44e2bd42-1f23-4563-a1d2-7765ab9181f6" containerName="registry-server" containerID="cri-o://8c3cfa9632209ead80d64ceea7d0876bbdcfdfc0eaa6cacd6e715b124bc2afb4" gracePeriod=2
Feb 18 19:20:36 crc kubenswrapper[4942]: I0218 19:20:36.977109 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5jvhl"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.109534 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr6h8\" (UniqueName: \"kubernetes.io/projected/44e2bd42-1f23-4563-a1d2-7765ab9181f6-kube-api-access-tr6h8\") pod \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\" (UID: \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\") "
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.109889 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e2bd42-1f23-4563-a1d2-7765ab9181f6-utilities\") pod \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\" (UID: \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\") "
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.109943 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e2bd42-1f23-4563-a1d2-7765ab9181f6-catalog-content\") pod \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\" (UID: \"44e2bd42-1f23-4563-a1d2-7765ab9181f6\") "
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.110671 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44e2bd42-1f23-4563-a1d2-7765ab9181f6-utilities" (OuterVolumeSpecName: "utilities") pod "44e2bd42-1f23-4563-a1d2-7765ab9181f6" (UID: "44e2bd42-1f23-4563-a1d2-7765ab9181f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.115547 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44e2bd42-1f23-4563-a1d2-7765ab9181f6-kube-api-access-tr6h8" (OuterVolumeSpecName: "kube-api-access-tr6h8") pod "44e2bd42-1f23-4563-a1d2-7765ab9181f6" (UID: "44e2bd42-1f23-4563-a1d2-7765ab9181f6"). InnerVolumeSpecName "kube-api-access-tr6h8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.211356 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44e2bd42-1f23-4563-a1d2-7765ab9181f6-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.211401 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr6h8\" (UniqueName: \"kubernetes.io/projected/44e2bd42-1f23-4563-a1d2-7765ab9181f6-kube-api-access-tr6h8\") on node \"crc\" DevicePath \"\""
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.238246 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44e2bd42-1f23-4563-a1d2-7765ab9181f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44e2bd42-1f23-4563-a1d2-7765ab9181f6" (UID: "44e2bd42-1f23-4563-a1d2-7765ab9181f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.312911 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44e2bd42-1f23-4563-a1d2-7765ab9181f6-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.613714 4942 generic.go:334] "Generic (PLEG): container finished" podID="a7f05662-6e61-4d86-8a52-13000d4bd2be" containerID="c444eba8355a9abafffa8c61716275eed3f491c2d65fff1fbecb6c7394ac87ff" exitCode=0
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.613869 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjnbk" event={"ID":"a7f05662-6e61-4d86-8a52-13000d4bd2be","Type":"ContainerDied","Data":"c444eba8355a9abafffa8c61716275eed3f491c2d65fff1fbecb6c7394ac87ff"}
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.619728 4942 generic.go:334] "Generic (PLEG): container finished" podID="44e2bd42-1f23-4563-a1d2-7765ab9181f6" containerID="8c3cfa9632209ead80d64ceea7d0876bbdcfdfc0eaa6cacd6e715b124bc2afb4" exitCode=0
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.619810 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jvhl" event={"ID":"44e2bd42-1f23-4563-a1d2-7765ab9181f6","Type":"ContainerDied","Data":"8c3cfa9632209ead80d64ceea7d0876bbdcfdfc0eaa6cacd6e715b124bc2afb4"}
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.619831 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5jvhl"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.619868 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jvhl" event={"ID":"44e2bd42-1f23-4563-a1d2-7765ab9181f6","Type":"ContainerDied","Data":"076f790451e8965bca0e7fb3c29d623ec83f9c2b76666a5189e58eb3eab1c839"}
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.619892 4942 scope.go:117] "RemoveContainer" containerID="8c3cfa9632209ead80d64ceea7d0876bbdcfdfc0eaa6cacd6e715b124bc2afb4"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.621936 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tk5v7" event={"ID":"934bc032-4641-47ee-9689-39edb4e5a24a","Type":"ContainerStarted","Data":"451c9fa4101f2cbb7a9ff1f28f5cddcf7c7d862a8917ab84623fde49ee3b6da1"}
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.640670 4942 scope.go:117] "RemoveContainer" containerID="798e050f4f2aa0f54695a9005889ba42dc8611c4b4073be200e3426ca54b5a65"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.656179 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tk5v7" podStartSLOduration=2.699569652 podStartE2EDuration="48.656150933s" podCreationTimestamp="2026-02-18 19:19:49 +0000 UTC" firstStartedPulling="2026-02-18 19:19:51.071874792 +0000 UTC m=+150.776807457" lastFinishedPulling="2026-02-18 19:20:37.028456073 +0000 UTC m=+196.733388738" observedRunningTime="2026-02-18 19:20:37.655812723 +0000 UTC m=+197.360745398" watchObservedRunningTime="2026-02-18 19:20:37.656150933 +0000 UTC m=+197.361083598"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.664378 4942 scope.go:117] "RemoveContainer" containerID="9f0f2fd818af74deb03761abad4e3f260742b088b8ddfc49644611d327e71c74"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.678001 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5jvhl"]
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.679029 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5jvhl"]
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.684268 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 18 19:20:37 crc kubenswrapper[4942]: E0218 19:20:37.684527 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e2bd42-1f23-4563-a1d2-7765ab9181f6" containerName="registry-server"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.684545 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e2bd42-1f23-4563-a1d2-7765ab9181f6" containerName="registry-server"
Feb 18 19:20:37 crc kubenswrapper[4942]: E0218 19:20:37.684556 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e2bd42-1f23-4563-a1d2-7765ab9181f6" containerName="extract-content"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.684563 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e2bd42-1f23-4563-a1d2-7765ab9181f6" containerName="extract-content"
Feb 18 19:20:37 crc kubenswrapper[4942]: E0218 19:20:37.684574 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44e2bd42-1f23-4563-a1d2-7765ab9181f6" containerName="extract-utilities"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.684581 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="44e2bd42-1f23-4563-a1d2-7765ab9181f6" containerName="extract-utilities"
Feb 18 19:20:37 crc kubenswrapper[4942]: E0218 19:20:37.684591 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b9174b-0251-447d-8266-56e92f6e9be1" containerName="registry-server"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.684597 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b9174b-0251-447d-8266-56e92f6e9be1" containerName="registry-server"
Feb 18 19:20:37 crc kubenswrapper[4942]: E0218 19:20:37.684607 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b9174b-0251-447d-8266-56e92f6e9be1" containerName="extract-utilities"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.684613 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b9174b-0251-447d-8266-56e92f6e9be1" containerName="extract-utilities"
Feb 18 19:20:37 crc kubenswrapper[4942]: E0218 19:20:37.684622 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b9174b-0251-447d-8266-56e92f6e9be1" containerName="extract-content"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.684628 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b9174b-0251-447d-8266-56e92f6e9be1" containerName="extract-content"
Feb 18 19:20:37 crc kubenswrapper[4942]: E0218 19:20:37.684635 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc416be-42a9-48cf-842d-51c1dcf886ad" containerName="pruner"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.684641 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc416be-42a9-48cf-842d-51c1dcf886ad" containerName="pruner"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.684736 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="02b9174b-0251-447d-8266-56e92f6e9be1" containerName="registry-server"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.684745 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="44e2bd42-1f23-4563-a1d2-7765ab9181f6" containerName="registry-server"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.684777 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bc416be-42a9-48cf-842d-51c1dcf886ad" containerName="pruner"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.685184 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.688284 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.688442 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.691478 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.701345 4942 scope.go:117] "RemoveContainer" containerID="8c3cfa9632209ead80d64ceea7d0876bbdcfdfc0eaa6cacd6e715b124bc2afb4"
Feb 18 19:20:37 crc kubenswrapper[4942]: E0218 19:20:37.707298 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c3cfa9632209ead80d64ceea7d0876bbdcfdfc0eaa6cacd6e715b124bc2afb4\": container with ID starting with 8c3cfa9632209ead80d64ceea7d0876bbdcfdfc0eaa6cacd6e715b124bc2afb4 not found: ID does not exist" containerID="8c3cfa9632209ead80d64ceea7d0876bbdcfdfc0eaa6cacd6e715b124bc2afb4"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.707690 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c3cfa9632209ead80d64ceea7d0876bbdcfdfc0eaa6cacd6e715b124bc2afb4"} err="failed to get container status \"8c3cfa9632209ead80d64ceea7d0876bbdcfdfc0eaa6cacd6e715b124bc2afb4\": rpc error: code = NotFound desc = could not find container \"8c3cfa9632209ead80d64ceea7d0876bbdcfdfc0eaa6cacd6e715b124bc2afb4\": container with ID starting with 8c3cfa9632209ead80d64ceea7d0876bbdcfdfc0eaa6cacd6e715b124bc2afb4 not found: ID does not exist"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.707788 4942 scope.go:117] "RemoveContainer" containerID="798e050f4f2aa0f54695a9005889ba42dc8611c4b4073be200e3426ca54b5a65"
Feb 18 19:20:37 crc kubenswrapper[4942]: E0218 19:20:37.708087 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"798e050f4f2aa0f54695a9005889ba42dc8611c4b4073be200e3426ca54b5a65\": container with ID starting with 798e050f4f2aa0f54695a9005889ba42dc8611c4b4073be200e3426ca54b5a65 not found: ID does not exist" containerID="798e050f4f2aa0f54695a9005889ba42dc8611c4b4073be200e3426ca54b5a65"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.708108 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"798e050f4f2aa0f54695a9005889ba42dc8611c4b4073be200e3426ca54b5a65"} err="failed to get container status \"798e050f4f2aa0f54695a9005889ba42dc8611c4b4073be200e3426ca54b5a65\": rpc error: code = NotFound desc = could not find container \"798e050f4f2aa0f54695a9005889ba42dc8611c4b4073be200e3426ca54b5a65\": container with ID starting with 798e050f4f2aa0f54695a9005889ba42dc8611c4b4073be200e3426ca54b5a65 not found: ID does not exist"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.708126 4942 scope.go:117] "RemoveContainer" containerID="9f0f2fd818af74deb03761abad4e3f260742b088b8ddfc49644611d327e71c74"
Feb 18 19:20:37 crc kubenswrapper[4942]: E0218 19:20:37.708339 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f0f2fd818af74deb03761abad4e3f260742b088b8ddfc49644611d327e71c74\": container with ID starting with 9f0f2fd818af74deb03761abad4e3f260742b088b8ddfc49644611d327e71c74 not found: ID does not exist" containerID="9f0f2fd818af74deb03761abad4e3f260742b088b8ddfc49644611d327e71c74"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.708361 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f0f2fd818af74deb03761abad4e3f260742b088b8ddfc49644611d327e71c74"} err="failed to get container status \"9f0f2fd818af74deb03761abad4e3f260742b088b8ddfc49644611d327e71c74\": rpc error: code = NotFound desc = could not find container \"9f0f2fd818af74deb03761abad4e3f260742b088b8ddfc49644611d327e71c74\": container with ID starting with 9f0f2fd818af74deb03761abad4e3f260742b088b8ddfc49644611d327e71c74 not found: ID does not exist"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.818518 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eea7d003-0909-4006-b81d-e566f256b0aa-var-lock\") pod \"installer-9-crc\" (UID: \"eea7d003-0909-4006-b81d-e566f256b0aa\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.818584 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eea7d003-0909-4006-b81d-e566f256b0aa-kubelet-dir\") pod \"installer-9-crc\" (UID: \"eea7d003-0909-4006-b81d-e566f256b0aa\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.818729 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eea7d003-0909-4006-b81d-e566f256b0aa-kube-api-access\") pod
\"installer-9-crc\" (UID: \"eea7d003-0909-4006-b81d-e566f256b0aa\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.920134 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eea7d003-0909-4006-b81d-e566f256b0aa-kube-api-access\") pod \"installer-9-crc\" (UID: \"eea7d003-0909-4006-b81d-e566f256b0aa\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.920191 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eea7d003-0909-4006-b81d-e566f256b0aa-var-lock\") pod \"installer-9-crc\" (UID: \"eea7d003-0909-4006-b81d-e566f256b0aa\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.920212 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eea7d003-0909-4006-b81d-e566f256b0aa-kubelet-dir\") pod \"installer-9-crc\" (UID: \"eea7d003-0909-4006-b81d-e566f256b0aa\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.920288 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eea7d003-0909-4006-b81d-e566f256b0aa-var-lock\") pod \"installer-9-crc\" (UID: \"eea7d003-0909-4006-b81d-e566f256b0aa\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.920316 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eea7d003-0909-4006-b81d-e566f256b0aa-kubelet-dir\") pod \"installer-9-crc\" (UID: \"eea7d003-0909-4006-b81d-e566f256b0aa\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:20:37 crc kubenswrapper[4942]: 
I0218 19:20:37.939337 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eea7d003-0909-4006-b81d-e566f256b0aa-kube-api-access\") pod \"installer-9-crc\" (UID: \"eea7d003-0909-4006-b81d-e566f256b0aa\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:20:37 crc kubenswrapper[4942]: I0218 19:20:37.999381 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:20:38 crc kubenswrapper[4942]: I0218 19:20:38.190521 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 19:20:38 crc kubenswrapper[4942]: W0218 19:20:38.199164 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podeea7d003_0909_4006_b81d_e566f256b0aa.slice/crio-239dfe6552f2af1d441cc549207cdf73962930bc9e631840f8db8225c35b625e WatchSource:0}: Error finding container 239dfe6552f2af1d441cc549207cdf73962930bc9e631840f8db8225c35b625e: Status 404 returned error can't find the container with id 239dfe6552f2af1d441cc549207cdf73962930bc9e631840f8db8225c35b625e Feb 18 19:20:38 crc kubenswrapper[4942]: I0218 19:20:38.630548 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjnbk" event={"ID":"a7f05662-6e61-4d86-8a52-13000d4bd2be","Type":"ContainerStarted","Data":"c4aefe50bfa9203f0d0c77dad76433e93286db36b8b60f68b3ee0a8c684c0a58"} Feb 18 19:20:38 crc kubenswrapper[4942]: I0218 19:20:38.632296 4942 generic.go:334] "Generic (PLEG): container finished" podID="07639322-4f8b-47d5-85c7-da678ca9eaf1" containerID="6f1810b19a4355734dbaeee787309c5dab10550211c3759c7c9b6ebd65265c09" exitCode=0 Feb 18 19:20:38 crc kubenswrapper[4942]: I0218 19:20:38.632366 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrlpg" 
event={"ID":"07639322-4f8b-47d5-85c7-da678ca9eaf1","Type":"ContainerDied","Data":"6f1810b19a4355734dbaeee787309c5dab10550211c3759c7c9b6ebd65265c09"} Feb 18 19:20:38 crc kubenswrapper[4942]: I0218 19:20:38.634977 4942 generic.go:334] "Generic (PLEG): container finished" podID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" containerID="fe0ffd866fd030a97ec5deaa0ccffe18989c2702e90b791041142cdf766720bc" exitCode=0 Feb 18 19:20:38 crc kubenswrapper[4942]: I0218 19:20:38.635023 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w75d5" event={"ID":"f8dc55ee-28aa-4789-96c1-0809c7abdc99","Type":"ContainerDied","Data":"fe0ffd866fd030a97ec5deaa0ccffe18989c2702e90b791041142cdf766720bc"} Feb 18 19:20:38 crc kubenswrapper[4942]: I0218 19:20:38.641601 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"eea7d003-0909-4006-b81d-e566f256b0aa","Type":"ContainerStarted","Data":"cd03ae906b7bee058d56cb0846d1b0e67c0721c950835150412c01f8c34159f0"} Feb 18 19:20:38 crc kubenswrapper[4942]: I0218 19:20:38.641799 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"eea7d003-0909-4006-b81d-e566f256b0aa","Type":"ContainerStarted","Data":"239dfe6552f2af1d441cc549207cdf73962930bc9e631840f8db8225c35b625e"} Feb 18 19:20:38 crc kubenswrapper[4942]: I0218 19:20:38.679198 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gjnbk" podStartSLOduration=3.685763764 podStartE2EDuration="50.679177156s" podCreationTimestamp="2026-02-18 19:19:48 +0000 UTC" firstStartedPulling="2026-02-18 19:19:51.058944893 +0000 UTC m=+150.763877558" lastFinishedPulling="2026-02-18 19:20:38.052358285 +0000 UTC m=+197.757290950" observedRunningTime="2026-02-18 19:20:38.653073356 +0000 UTC m=+198.358006021" watchObservedRunningTime="2026-02-18 19:20:38.679177156 +0000 UTC m=+198.384109821" Feb 18 
19:20:38 crc kubenswrapper[4942]: I0218 19:20:38.722256 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.722229636 podStartE2EDuration="1.722229636s" podCreationTimestamp="2026-02-18 19:20:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:38.721927787 +0000 UTC m=+198.426860452" watchObservedRunningTime="2026-02-18 19:20:38.722229636 +0000 UTC m=+198.427162301" Feb 18 19:20:39 crc kubenswrapper[4942]: I0218 19:20:39.055657 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44e2bd42-1f23-4563-a1d2-7765ab9181f6" path="/var/lib/kubelet/pods/44e2bd42-1f23-4563-a1d2-7765ab9181f6/volumes" Feb 18 19:20:39 crc kubenswrapper[4942]: I0218 19:20:39.336907 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tm22r" Feb 18 19:20:39 crc kubenswrapper[4942]: I0218 19:20:39.397211 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gjnbk" Feb 18 19:20:39 crc kubenswrapper[4942]: I0218 19:20:39.397266 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gjnbk" Feb 18 19:20:39 crc kubenswrapper[4942]: I0218 19:20:39.664421 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrlpg" event={"ID":"07639322-4f8b-47d5-85c7-da678ca9eaf1","Type":"ContainerStarted","Data":"0485d8ae4ec42faf8a5c04d463333b6555724522c62202a6a4e3aa41dc6c9e87"} Feb 18 19:20:39 crc kubenswrapper[4942]: I0218 19:20:39.668199 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w75d5" 
event={"ID":"f8dc55ee-28aa-4789-96c1-0809c7abdc99","Type":"ContainerStarted","Data":"d498a2655c5a085ae0ca0309638a428cb63df27ec2e1344df0a14c1d63913544"} Feb 18 19:20:39 crc kubenswrapper[4942]: I0218 19:20:39.686551 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vrlpg" podStartSLOduration=2.639806826 podStartE2EDuration="49.686528534s" podCreationTimestamp="2026-02-18 19:19:50 +0000 UTC" firstStartedPulling="2026-02-18 19:19:52.116175591 +0000 UTC m=+151.821108256" lastFinishedPulling="2026-02-18 19:20:39.162897309 +0000 UTC m=+198.867829964" observedRunningTime="2026-02-18 19:20:39.683496788 +0000 UTC m=+199.388429463" watchObservedRunningTime="2026-02-18 19:20:39.686528534 +0000 UTC m=+199.391461199" Feb 18 19:20:39 crc kubenswrapper[4942]: I0218 19:20:39.699587 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w75d5" podStartSLOduration=2.845630752 podStartE2EDuration="48.699563904s" podCreationTimestamp="2026-02-18 19:19:51 +0000 UTC" firstStartedPulling="2026-02-18 19:19:53.169210515 +0000 UTC m=+152.874143180" lastFinishedPulling="2026-02-18 19:20:39.023143657 +0000 UTC m=+198.728076332" observedRunningTime="2026-02-18 19:20:39.698329689 +0000 UTC m=+199.403262354" watchObservedRunningTime="2026-02-18 19:20:39.699563904 +0000 UTC m=+199.404496569" Feb 18 19:20:39 crc kubenswrapper[4942]: I0218 19:20:39.795866 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:20:39 crc kubenswrapper[4942]: I0218 19:20:39.795926 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:20:39 crc kubenswrapper[4942]: I0218 19:20:39.870033 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:20:40 
crc kubenswrapper[4942]: I0218 19:20:40.438319 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-gjnbk" podUID="a7f05662-6e61-4d86-8a52-13000d4bd2be" containerName="registry-server" probeResult="failure" output=< Feb 18 19:20:40 crc kubenswrapper[4942]: timeout: failed to connect service ":50051" within 1s Feb 18 19:20:40 crc kubenswrapper[4942]: > Feb 18 19:20:41 crc kubenswrapper[4942]: I0218 19:20:41.283275 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:20:41 crc kubenswrapper[4942]: I0218 19:20:41.283542 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:20:41 crc kubenswrapper[4942]: I0218 19:20:41.321819 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:20:41 crc kubenswrapper[4942]: I0218 19:20:41.825515 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w75d5" Feb 18 19:20:41 crc kubenswrapper[4942]: I0218 19:20:41.825583 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w75d5" Feb 18 19:20:41 crc kubenswrapper[4942]: I0218 19:20:41.876379 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w75d5" Feb 18 19:20:48 crc kubenswrapper[4942]: I0218 19:20:48.817957 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" podUID="42dda107-038c-42c1-8182-52bee75caea9" containerName="oauth-openshift" containerID="cri-o://00cc93cdee68acda24bf0c7ef246cedca573cf4a425ffa82cd541a7e7fb12fe0" gracePeriod=15 Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.458777 4942 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gjnbk" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.509121 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gjnbk" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.716549 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.747189 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-666545c866-26rlh"] Feb 18 19:20:49 crc kubenswrapper[4942]: E0218 19:20:49.747465 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42dda107-038c-42c1-8182-52bee75caea9" containerName="oauth-openshift" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.747479 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="42dda107-038c-42c1-8182-52bee75caea9" containerName="oauth-openshift" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.747607 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="42dda107-038c-42c1-8182-52bee75caea9" containerName="oauth-openshift" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.748073 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.748249 4942 generic.go:334] "Generic (PLEG): container finished" podID="42dda107-038c-42c1-8182-52bee75caea9" containerID="00cc93cdee68acda24bf0c7ef246cedca573cf4a425ffa82cd541a7e7fb12fe0" exitCode=0 Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.748456 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" event={"ID":"42dda107-038c-42c1-8182-52bee75caea9","Type":"ContainerDied","Data":"00cc93cdee68acda24bf0c7ef246cedca573cf4a425ffa82cd541a7e7fb12fe0"} Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.748500 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" event={"ID":"42dda107-038c-42c1-8182-52bee75caea9","Type":"ContainerDied","Data":"549a45966f3465b915ee762043425f7fc34d780e5d763266b632f538fe2cd88e"} Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.748520 4942 scope.go:117] "RemoveContainer" containerID="00cc93cdee68acda24bf0c7ef246cedca573cf4a425ffa82cd541a7e7fb12fe0" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.748523 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kpfjc" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.766385 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-666545c866-26rlh"] Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.772493 4942 scope.go:117] "RemoveContainer" containerID="00cc93cdee68acda24bf0c7ef246cedca573cf4a425ffa82cd541a7e7fb12fe0" Feb 18 19:20:49 crc kubenswrapper[4942]: E0218 19:20:49.784299 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00cc93cdee68acda24bf0c7ef246cedca573cf4a425ffa82cd541a7e7fb12fe0\": container with ID starting with 00cc93cdee68acda24bf0c7ef246cedca573cf4a425ffa82cd541a7e7fb12fe0 not found: ID does not exist" containerID="00cc93cdee68acda24bf0c7ef246cedca573cf4a425ffa82cd541a7e7fb12fe0" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.784356 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00cc93cdee68acda24bf0c7ef246cedca573cf4a425ffa82cd541a7e7fb12fe0"} err="failed to get container status \"00cc93cdee68acda24bf0c7ef246cedca573cf4a425ffa82cd541a7e7fb12fe0\": rpc error: code = NotFound desc = could not find container \"00cc93cdee68acda24bf0c7ef246cedca573cf4a425ffa82cd541a7e7fb12fe0\": container with ID starting with 00cc93cdee68acda24bf0c7ef246cedca573cf4a425ffa82cd541a7e7fb12fe0 not found: ID does not exist" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.806568 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-idp-0-file-data\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.806686 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-router-certs\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.806718 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-audit-policies\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.806752 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-error\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.806794 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-provider-selection\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.806873 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-login\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.806920 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/42dda107-038c-42c1-8182-52bee75caea9-audit-dir\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.806948 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-serving-cert\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.806975 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wh89\" (UniqueName: \"kubernetes.io/projected/42dda107-038c-42c1-8182-52bee75caea9-kube-api-access-2wh89\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.806997 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-service-ca\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807045 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-trusted-ca-bundle\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807073 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-ocp-branding-template\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807130 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-cliconfig\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807162 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-session\") pod \"42dda107-038c-42c1-8182-52bee75caea9\" (UID: \"42dda107-038c-42c1-8182-52bee75caea9\") " Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807353 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807385 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-cliconfig\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807406 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-serving-cert\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807447 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-user-template-login\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807481 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807504 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78f383f9-664c-43eb-9253-d9df1eaa9716-audit-dir\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807542 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/78f383f9-664c-43eb-9253-d9df1eaa9716-audit-policies\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807599 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-router-certs\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807634 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807661 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-user-template-error\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807686 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-service-ca\") pod \"oauth-openshift-666545c866-26rlh\" (UID: 
\"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807713 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-session\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807748 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8lgn\" (UniqueName: \"kubernetes.io/projected/78f383f9-664c-43eb-9253-d9df1eaa9716-kube-api-access-x8lgn\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.807802 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.810387 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.810972 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42dda107-038c-42c1-8182-52bee75caea9-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.811313 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.812260 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.817735 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.817792 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.819468 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42dda107-038c-42c1-8182-52bee75caea9-kube-api-access-2wh89" (OuterVolumeSpecName: "kube-api-access-2wh89") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "kube-api-access-2wh89". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.820370 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.831867 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.832518 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.836010 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.839937 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.842241 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.847566 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "42dda107-038c-42c1-8182-52bee75caea9" (UID: "42dda107-038c-42c1-8182-52bee75caea9"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.862535 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908604 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908658 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-cliconfig\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908678 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-serving-cert\") pod \"oauth-openshift-666545c866-26rlh\" (UID: 
\"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908700 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-user-template-login\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908732 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908749 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78f383f9-664c-43eb-9253-d9df1eaa9716-audit-dir\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908787 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/78f383f9-664c-43eb-9253-d9df1eaa9716-audit-policies\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908807 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-router-certs\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908831 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908851 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-user-template-error\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908865 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-service-ca\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908884 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-session\") pod \"oauth-openshift-666545c866-26rlh\" (UID: 
\"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908910 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8lgn\" (UniqueName: \"kubernetes.io/projected/78f383f9-664c-43eb-9253-d9df1eaa9716-kube-api-access-x8lgn\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908932 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908974 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908985 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.908994 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.909004 4942 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.909015 4942 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.909025 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.909036 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.909046 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.909055 4942 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/42dda107-038c-42c1-8182-52bee75caea9-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.909063 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.909072 4942 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wh89\" (UniqueName: \"kubernetes.io/projected/42dda107-038c-42c1-8182-52bee75caea9-kube-api-access-2wh89\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.909080 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.909091 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.909099 4942 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/42dda107-038c-42c1-8182-52bee75caea9-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.909847 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78f383f9-664c-43eb-9253-d9df1eaa9716-audit-dir\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.910055 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-cliconfig\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc 
kubenswrapper[4942]: I0218 19:20:49.910585 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-service-ca\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.910870 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/78f383f9-664c-43eb-9253-d9df1eaa9716-audit-policies\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.910888 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.913292 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.913321 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.913310 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.913419 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-session\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.913783 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-router-certs\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.914178 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-user-template-login\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.915024 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.915270 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/78f383f9-664c-43eb-9253-d9df1eaa9716-v4-0-config-user-template-error\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:49 crc kubenswrapper[4942]: I0218 19:20:49.925594 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8lgn\" (UniqueName: \"kubernetes.io/projected/78f383f9-664c-43eb-9253-d9df1eaa9716-kube-api-access-x8lgn\") pod \"oauth-openshift-666545c866-26rlh\" (UID: \"78f383f9-664c-43eb-9253-d9df1eaa9716\") " pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:50 crc kubenswrapper[4942]: I0218 19:20:50.078543 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:50 crc kubenswrapper[4942]: I0218 19:20:50.104851 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kpfjc"] Feb 18 19:20:50 crc kubenswrapper[4942]: I0218 19:20:50.109161 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kpfjc"] Feb 18 19:20:50 crc kubenswrapper[4942]: I0218 19:20:50.378824 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-666545c866-26rlh"] Feb 18 19:20:50 crc kubenswrapper[4942]: W0218 19:20:50.398660 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78f383f9_664c_43eb_9253_d9df1eaa9716.slice/crio-965e0d200427d08586902f78bf1baa491dbf59fa6e987cf6374a2c46d986a8a7 WatchSource:0}: Error finding container 965e0d200427d08586902f78bf1baa491dbf59fa6e987cf6374a2c46d986a8a7: Status 404 returned error can't find the container with id 965e0d200427d08586902f78bf1baa491dbf59fa6e987cf6374a2c46d986a8a7 Feb 18 19:20:50 crc kubenswrapper[4942]: I0218 19:20:50.762689 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-666545c866-26rlh" event={"ID":"78f383f9-664c-43eb-9253-d9df1eaa9716","Type":"ContainerStarted","Data":"eee278d9dc862c887d60eff6183347bcd5020cd73bab83eb57f6af86c7f3f58a"} Feb 18 19:20:50 crc kubenswrapper[4942]: I0218 19:20:50.763338 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:50 crc kubenswrapper[4942]: I0218 19:20:50.763367 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-666545c866-26rlh" 
event={"ID":"78f383f9-664c-43eb-9253-d9df1eaa9716","Type":"ContainerStarted","Data":"965e0d200427d08586902f78bf1baa491dbf59fa6e987cf6374a2c46d986a8a7"} Feb 18 19:20:50 crc kubenswrapper[4942]: I0218 19:20:50.766129 4942 patch_prober.go:28] interesting pod/oauth-openshift-666545c866-26rlh container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" start-of-body= Feb 18 19:20:50 crc kubenswrapper[4942]: I0218 19:20:50.766204 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-666545c866-26rlh" podUID="78f383f9-664c-43eb-9253-d9df1eaa9716" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" Feb 18 19:20:50 crc kubenswrapper[4942]: I0218 19:20:50.794481 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-666545c866-26rlh" podStartSLOduration=27.794446709 podStartE2EDuration="27.794446709s" podCreationTimestamp="2026-02-18 19:20:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:20:50.790084885 +0000 UTC m=+210.495017610" watchObservedRunningTime="2026-02-18 19:20:50.794446709 +0000 UTC m=+210.499379414" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.047254 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42dda107-038c-42c1-8182-52bee75caea9" path="/var/lib/kubelet/pods/42dda107-038c-42c1-8182-52bee75caea9/volumes" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.353825 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.367105 4942 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tk5v7"] Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.367574 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tk5v7" podUID="934bc032-4641-47ee-9689-39edb4e5a24a" containerName="registry-server" containerID="cri-o://451c9fa4101f2cbb7a9ff1f28f5cddcf7c7d862a8917ab84623fde49ee3b6da1" gracePeriod=2 Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.726616 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.755005 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98p66\" (UniqueName: \"kubernetes.io/projected/934bc032-4641-47ee-9689-39edb4e5a24a-kube-api-access-98p66\") pod \"934bc032-4641-47ee-9689-39edb4e5a24a\" (UID: \"934bc032-4641-47ee-9689-39edb4e5a24a\") " Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.755153 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/934bc032-4641-47ee-9689-39edb4e5a24a-catalog-content\") pod \"934bc032-4641-47ee-9689-39edb4e5a24a\" (UID: \"934bc032-4641-47ee-9689-39edb4e5a24a\") " Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.755229 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/934bc032-4641-47ee-9689-39edb4e5a24a-utilities\") pod \"934bc032-4641-47ee-9689-39edb4e5a24a\" (UID: \"934bc032-4641-47ee-9689-39edb4e5a24a\") " Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.757078 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/934bc032-4641-47ee-9689-39edb4e5a24a-utilities" (OuterVolumeSpecName: "utilities") pod 
"934bc032-4641-47ee-9689-39edb4e5a24a" (UID: "934bc032-4641-47ee-9689-39edb4e5a24a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.764648 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/934bc032-4641-47ee-9689-39edb4e5a24a-kube-api-access-98p66" (OuterVolumeSpecName: "kube-api-access-98p66") pod "934bc032-4641-47ee-9689-39edb4e5a24a" (UID: "934bc032-4641-47ee-9689-39edb4e5a24a"). InnerVolumeSpecName "kube-api-access-98p66". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.775718 4942 generic.go:334] "Generic (PLEG): container finished" podID="934bc032-4641-47ee-9689-39edb4e5a24a" containerID="451c9fa4101f2cbb7a9ff1f28f5cddcf7c7d862a8917ab84623fde49ee3b6da1" exitCode=0 Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.775831 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tk5v7" event={"ID":"934bc032-4641-47ee-9689-39edb4e5a24a","Type":"ContainerDied","Data":"451c9fa4101f2cbb7a9ff1f28f5cddcf7c7d862a8917ab84623fde49ee3b6da1"} Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.775884 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tk5v7" event={"ID":"934bc032-4641-47ee-9689-39edb4e5a24a","Type":"ContainerDied","Data":"5c780f3eaf3a7663544d07b41d9cc753cd4008f1802dbe09d0227e582dd487c7"} Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.775906 4942 scope.go:117] "RemoveContainer" containerID="451c9fa4101f2cbb7a9ff1f28f5cddcf7c7d862a8917ab84623fde49ee3b6da1" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.775842 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tk5v7" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.783343 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-666545c866-26rlh" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.796022 4942 scope.go:117] "RemoveContainer" containerID="3c851917d5f3c7ded5c54a7fc6671076bec8b33064360e4e202185900c76a141" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.817275 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/934bc032-4641-47ee-9689-39edb4e5a24a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "934bc032-4641-47ee-9689-39edb4e5a24a" (UID: "934bc032-4641-47ee-9689-39edb4e5a24a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.841889 4942 scope.go:117] "RemoveContainer" containerID="7fe3b6c87d6a3eef04ee129d8d8024bf02b590368b744b11406d1709338db6c7" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.860754 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98p66\" (UniqueName: \"kubernetes.io/projected/934bc032-4641-47ee-9689-39edb4e5a24a-kube-api-access-98p66\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.860808 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/934bc032-4641-47ee-9689-39edb4e5a24a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.860818 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/934bc032-4641-47ee-9689-39edb4e5a24a-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.876541 4942 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w75d5" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.879493 4942 scope.go:117] "RemoveContainer" containerID="451c9fa4101f2cbb7a9ff1f28f5cddcf7c7d862a8917ab84623fde49ee3b6da1" Feb 18 19:20:51 crc kubenswrapper[4942]: E0218 19:20:51.881400 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"451c9fa4101f2cbb7a9ff1f28f5cddcf7c7d862a8917ab84623fde49ee3b6da1\": container with ID starting with 451c9fa4101f2cbb7a9ff1f28f5cddcf7c7d862a8917ab84623fde49ee3b6da1 not found: ID does not exist" containerID="451c9fa4101f2cbb7a9ff1f28f5cddcf7c7d862a8917ab84623fde49ee3b6da1" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.881460 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"451c9fa4101f2cbb7a9ff1f28f5cddcf7c7d862a8917ab84623fde49ee3b6da1"} err="failed to get container status \"451c9fa4101f2cbb7a9ff1f28f5cddcf7c7d862a8917ab84623fde49ee3b6da1\": rpc error: code = NotFound desc = could not find container \"451c9fa4101f2cbb7a9ff1f28f5cddcf7c7d862a8917ab84623fde49ee3b6da1\": container with ID starting with 451c9fa4101f2cbb7a9ff1f28f5cddcf7c7d862a8917ab84623fde49ee3b6da1 not found: ID does not exist" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.881489 4942 scope.go:117] "RemoveContainer" containerID="3c851917d5f3c7ded5c54a7fc6671076bec8b33064360e4e202185900c76a141" Feb 18 19:20:51 crc kubenswrapper[4942]: E0218 19:20:51.881953 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c851917d5f3c7ded5c54a7fc6671076bec8b33064360e4e202185900c76a141\": container with ID starting with 3c851917d5f3c7ded5c54a7fc6671076bec8b33064360e4e202185900c76a141 not found: ID does not exist" containerID="3c851917d5f3c7ded5c54a7fc6671076bec8b33064360e4e202185900c76a141" Feb 18 19:20:51 crc 
kubenswrapper[4942]: I0218 19:20:51.882012 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c851917d5f3c7ded5c54a7fc6671076bec8b33064360e4e202185900c76a141"} err="failed to get container status \"3c851917d5f3c7ded5c54a7fc6671076bec8b33064360e4e202185900c76a141\": rpc error: code = NotFound desc = could not find container \"3c851917d5f3c7ded5c54a7fc6671076bec8b33064360e4e202185900c76a141\": container with ID starting with 3c851917d5f3c7ded5c54a7fc6671076bec8b33064360e4e202185900c76a141 not found: ID does not exist" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.882045 4942 scope.go:117] "RemoveContainer" containerID="7fe3b6c87d6a3eef04ee129d8d8024bf02b590368b744b11406d1709338db6c7" Feb 18 19:20:51 crc kubenswrapper[4942]: E0218 19:20:51.882458 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fe3b6c87d6a3eef04ee129d8d8024bf02b590368b744b11406d1709338db6c7\": container with ID starting with 7fe3b6c87d6a3eef04ee129d8d8024bf02b590368b744b11406d1709338db6c7 not found: ID does not exist" containerID="7fe3b6c87d6a3eef04ee129d8d8024bf02b590368b744b11406d1709338db6c7" Feb 18 19:20:51 crc kubenswrapper[4942]: I0218 19:20:51.882490 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fe3b6c87d6a3eef04ee129d8d8024bf02b590368b744b11406d1709338db6c7"} err="failed to get container status \"7fe3b6c87d6a3eef04ee129d8d8024bf02b590368b744b11406d1709338db6c7\": rpc error: code = NotFound desc = could not find container \"7fe3b6c87d6a3eef04ee129d8d8024bf02b590368b744b11406d1709338db6c7\": container with ID starting with 7fe3b6c87d6a3eef04ee129d8d8024bf02b590368b744b11406d1709338db6c7 not found: ID does not exist" Feb 18 19:20:52 crc kubenswrapper[4942]: I0218 19:20:52.101122 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tk5v7"] Feb 18 19:20:52 
crc kubenswrapper[4942]: I0218 19:20:52.103946 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tk5v7"] Feb 18 19:20:53 crc kubenswrapper[4942]: I0218 19:20:53.047993 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="934bc032-4641-47ee-9689-39edb4e5a24a" path="/var/lib/kubelet/pods/934bc032-4641-47ee-9689-39edb4e5a24a/volumes" Feb 18 19:20:53 crc kubenswrapper[4942]: I0218 19:20:53.742550 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:20:53 crc kubenswrapper[4942]: I0218 19:20:53.742663 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:20:53 crc kubenswrapper[4942]: I0218 19:20:53.742745 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:20:53 crc kubenswrapper[4942]: I0218 19:20:53.743838 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d3f2583de812c35d32f50918d2ea1071672e650d7bb1eca09416558ca25526b1"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:20:53 crc kubenswrapper[4942]: I0218 19:20:53.743972 4942 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://d3f2583de812c35d32f50918d2ea1071672e650d7bb1eca09416558ca25526b1" gracePeriod=600 Feb 18 19:20:53 crc kubenswrapper[4942]: I0218 19:20:53.768741 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w75d5"] Feb 18 19:20:53 crc kubenswrapper[4942]: I0218 19:20:53.769176 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w75d5" podUID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" containerName="registry-server" containerID="cri-o://d498a2655c5a085ae0ca0309638a428cb63df27ec2e1344df0a14c1d63913544" gracePeriod=2 Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.188221 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w75d5" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.199433 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8dc55ee-28aa-4789-96c1-0809c7abdc99-utilities\") pod \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\" (UID: \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\") " Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.199559 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8dc55ee-28aa-4789-96c1-0809c7abdc99-catalog-content\") pod \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\" (UID: \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\") " Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.199635 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db9ht\" (UniqueName: \"kubernetes.io/projected/f8dc55ee-28aa-4789-96c1-0809c7abdc99-kube-api-access-db9ht\") pod 
\"f8dc55ee-28aa-4789-96c1-0809c7abdc99\" (UID: \"f8dc55ee-28aa-4789-96c1-0809c7abdc99\") " Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.202865 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8dc55ee-28aa-4789-96c1-0809c7abdc99-utilities" (OuterVolumeSpecName: "utilities") pod "f8dc55ee-28aa-4789-96c1-0809c7abdc99" (UID: "f8dc55ee-28aa-4789-96c1-0809c7abdc99"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.206213 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8dc55ee-28aa-4789-96c1-0809c7abdc99-kube-api-access-db9ht" (OuterVolumeSpecName: "kube-api-access-db9ht") pod "f8dc55ee-28aa-4789-96c1-0809c7abdc99" (UID: "f8dc55ee-28aa-4789-96c1-0809c7abdc99"). InnerVolumeSpecName "kube-api-access-db9ht". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.228480 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8dc55ee-28aa-4789-96c1-0809c7abdc99-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8dc55ee-28aa-4789-96c1-0809c7abdc99" (UID: "f8dc55ee-28aa-4789-96c1-0809c7abdc99"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.300454 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8dc55ee-28aa-4789-96c1-0809c7abdc99-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.300519 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8dc55ee-28aa-4789-96c1-0809c7abdc99-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.300539 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db9ht\" (UniqueName: \"kubernetes.io/projected/f8dc55ee-28aa-4789-96c1-0809c7abdc99-kube-api-access-db9ht\") on node \"crc\" DevicePath \"\"" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.804328 4942 generic.go:334] "Generic (PLEG): container finished" podID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" containerID="d498a2655c5a085ae0ca0309638a428cb63df27ec2e1344df0a14c1d63913544" exitCode=0 Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.804424 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w75d5" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.804434 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w75d5" event={"ID":"f8dc55ee-28aa-4789-96c1-0809c7abdc99","Type":"ContainerDied","Data":"d498a2655c5a085ae0ca0309638a428cb63df27ec2e1344df0a14c1d63913544"} Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.804996 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w75d5" event={"ID":"f8dc55ee-28aa-4789-96c1-0809c7abdc99","Type":"ContainerDied","Data":"bf827b49c615857eb54c5c1b4eb25133056e0a9065497fbb34a9215010ac6e9f"} Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.805035 4942 scope.go:117] "RemoveContainer" containerID="d498a2655c5a085ae0ca0309638a428cb63df27ec2e1344df0a14c1d63913544" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.808747 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="d3f2583de812c35d32f50918d2ea1071672e650d7bb1eca09416558ca25526b1" exitCode=0 Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.808793 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"d3f2583de812c35d32f50918d2ea1071672e650d7bb1eca09416558ca25526b1"} Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.808875 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"cbd8c39f4ca27a862760680c197d71be21444460d43b83855f644da4c249ce06"} Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.839906 4942 scope.go:117] "RemoveContainer" 
containerID="fe0ffd866fd030a97ec5deaa0ccffe18989c2702e90b791041142cdf766720bc" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.859473 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w75d5"] Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.862740 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w75d5"] Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.870517 4942 scope.go:117] "RemoveContainer" containerID="b6793cfda70d58e1f6a7766cda5ab7da29921b9b2216e4d8bab414d83dbeaadd" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.890004 4942 scope.go:117] "RemoveContainer" containerID="d498a2655c5a085ae0ca0309638a428cb63df27ec2e1344df0a14c1d63913544" Feb 18 19:20:54 crc kubenswrapper[4942]: E0218 19:20:54.890563 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d498a2655c5a085ae0ca0309638a428cb63df27ec2e1344df0a14c1d63913544\": container with ID starting with d498a2655c5a085ae0ca0309638a428cb63df27ec2e1344df0a14c1d63913544 not found: ID does not exist" containerID="d498a2655c5a085ae0ca0309638a428cb63df27ec2e1344df0a14c1d63913544" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.890619 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d498a2655c5a085ae0ca0309638a428cb63df27ec2e1344df0a14c1d63913544"} err="failed to get container status \"d498a2655c5a085ae0ca0309638a428cb63df27ec2e1344df0a14c1d63913544\": rpc error: code = NotFound desc = could not find container \"d498a2655c5a085ae0ca0309638a428cb63df27ec2e1344df0a14c1d63913544\": container with ID starting with d498a2655c5a085ae0ca0309638a428cb63df27ec2e1344df0a14c1d63913544 not found: ID does not exist" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.890658 4942 scope.go:117] "RemoveContainer" 
containerID="fe0ffd866fd030a97ec5deaa0ccffe18989c2702e90b791041142cdf766720bc" Feb 18 19:20:54 crc kubenswrapper[4942]: E0218 19:20:54.891325 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe0ffd866fd030a97ec5deaa0ccffe18989c2702e90b791041142cdf766720bc\": container with ID starting with fe0ffd866fd030a97ec5deaa0ccffe18989c2702e90b791041142cdf766720bc not found: ID does not exist" containerID="fe0ffd866fd030a97ec5deaa0ccffe18989c2702e90b791041142cdf766720bc" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.891356 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe0ffd866fd030a97ec5deaa0ccffe18989c2702e90b791041142cdf766720bc"} err="failed to get container status \"fe0ffd866fd030a97ec5deaa0ccffe18989c2702e90b791041142cdf766720bc\": rpc error: code = NotFound desc = could not find container \"fe0ffd866fd030a97ec5deaa0ccffe18989c2702e90b791041142cdf766720bc\": container with ID starting with fe0ffd866fd030a97ec5deaa0ccffe18989c2702e90b791041142cdf766720bc not found: ID does not exist" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.891376 4942 scope.go:117] "RemoveContainer" containerID="b6793cfda70d58e1f6a7766cda5ab7da29921b9b2216e4d8bab414d83dbeaadd" Feb 18 19:20:54 crc kubenswrapper[4942]: E0218 19:20:54.891863 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6793cfda70d58e1f6a7766cda5ab7da29921b9b2216e4d8bab414d83dbeaadd\": container with ID starting with b6793cfda70d58e1f6a7766cda5ab7da29921b9b2216e4d8bab414d83dbeaadd not found: ID does not exist" containerID="b6793cfda70d58e1f6a7766cda5ab7da29921b9b2216e4d8bab414d83dbeaadd" Feb 18 19:20:54 crc kubenswrapper[4942]: I0218 19:20:54.891928 4942 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b6793cfda70d58e1f6a7766cda5ab7da29921b9b2216e4d8bab414d83dbeaadd"} err="failed to get container status \"b6793cfda70d58e1f6a7766cda5ab7da29921b9b2216e4d8bab414d83dbeaadd\": rpc error: code = NotFound desc = could not find container \"b6793cfda70d58e1f6a7766cda5ab7da29921b9b2216e4d8bab414d83dbeaadd\": container with ID starting with b6793cfda70d58e1f6a7766cda5ab7da29921b9b2216e4d8bab414d83dbeaadd not found: ID does not exist" Feb 18 19:20:55 crc kubenswrapper[4942]: I0218 19:20:55.047959 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" path="/var/lib/kubelet/pods/f8dc55ee-28aa-4789-96c1-0809c7abdc99/volumes" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.191948 4942 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.193050 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" containerName="extract-content" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.193073 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" containerName="extract-content" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.193093 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" containerName="registry-server" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.193106 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" containerName="registry-server" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.193130 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934bc032-4641-47ee-9689-39edb4e5a24a" containerName="registry-server" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.193144 4942 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="934bc032-4641-47ee-9689-39edb4e5a24a" containerName="registry-server" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.193167 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934bc032-4641-47ee-9689-39edb4e5a24a" containerName="extract-utilities" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.193180 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="934bc032-4641-47ee-9689-39edb4e5a24a" containerName="extract-utilities" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.193200 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934bc032-4641-47ee-9689-39edb4e5a24a" containerName="extract-content" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.193212 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="934bc032-4641-47ee-9689-39edb4e5a24a" containerName="extract-content" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.193232 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" containerName="extract-utilities" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.193245 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" containerName="extract-utilities" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.193414 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8dc55ee-28aa-4789-96c1-0809c7abdc99" containerName="registry-server" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.193433 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="934bc032-4641-47ee-9689-39edb4e5a24a" containerName="registry-server" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.193996 4942 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194036 4942 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194155 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194438 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383" gracePeriod=15 Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194451 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3" gracePeriod=15 Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194497 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88" gracePeriod=15 Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194520 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954" gracePeriod=15 Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194446 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d" gracePeriod=15 Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.194752 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194779 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.194789 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194794 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.194800 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194806 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.194814 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194820 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.194830 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194835 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.194841 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194847 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.194854 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194860 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194961 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194974 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194985 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.194994 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.195002 4942 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.195012 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.195094 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.195100 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.195222 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.198795 4942 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.275090 4942 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.188:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.324168 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.324221 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.324240 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.324257 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.324274 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.324472 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.324564 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.324705 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.425778 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426108 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426136 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.425899 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426157 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426195 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426216 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426237 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426251 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426295 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426258 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426263 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426355 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426390 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426442 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.426491 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.575835 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.603534 4942 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.188:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18956d8c05eebe81 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 19:21:16.602539649 +0000 UTC m=+236.307472314,LastTimestamp:2026-02-18 19:21:16.602539649 +0000 UTC m=+236.307472314,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.946127 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.947984 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.948747 4942 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3" exitCode=0 Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.948786 4942 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88" exitCode=0 Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.948794 4942 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d" exitCode=0 Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.948801 4942 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954" exitCode=2 Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.948800 4942 scope.go:117] "RemoveContainer" containerID="b82ad9b4d66dae63d0d0fcf0dd65333cf43c3b1d6006e129b7db1c6320bfdee8" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.954292 4942 generic.go:334] "Generic (PLEG): container finished" podID="eea7d003-0909-4006-b81d-e566f256b0aa" containerID="cd03ae906b7bee058d56cb0846d1b0e67c0721c950835150412c01f8c34159f0" exitCode=0 Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.954346 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"eea7d003-0909-4006-b81d-e566f256b0aa","Type":"ContainerDied","Data":"cd03ae906b7bee058d56cb0846d1b0e67c0721c950835150412c01f8c34159f0"} Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.954987 4942 status_manager.go:851] "Failed to get status for pod" podUID="eea7d003-0909-4006-b81d-e566f256b0aa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:16 crc 
kubenswrapper[4942]: I0218 19:21:16.955923 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"17d52aa652e2262a448752f8eeedf1ade032558596806a1871b71588f0f54812"} Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.955984 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"45e67579ac13322a7f5886f560eaf4d5f854a9c9c1fd56d9f69639efc91d0d7f"} Feb 18 19:21:16 crc kubenswrapper[4942]: E0218 19:21:16.956444 4942 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.188:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:21:16 crc kubenswrapper[4942]: I0218 19:21:16.956474 4942 status_manager.go:851] "Failed to get status for pod" podUID="eea7d003-0909-4006-b81d-e566f256b0aa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:17 crc kubenswrapper[4942]: I0218 19:21:17.965854 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.223462 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.224725 4942 status_manager.go:851] "Failed to get status for pod" podUID="eea7d003-0909-4006-b81d-e566f256b0aa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.249278 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eea7d003-0909-4006-b81d-e566f256b0aa-kube-api-access\") pod \"eea7d003-0909-4006-b81d-e566f256b0aa\" (UID: \"eea7d003-0909-4006-b81d-e566f256b0aa\") " Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.249332 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eea7d003-0909-4006-b81d-e566f256b0aa-var-lock\") pod \"eea7d003-0909-4006-b81d-e566f256b0aa\" (UID: \"eea7d003-0909-4006-b81d-e566f256b0aa\") " Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.249380 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eea7d003-0909-4006-b81d-e566f256b0aa-kubelet-dir\") pod \"eea7d003-0909-4006-b81d-e566f256b0aa\" (UID: \"eea7d003-0909-4006-b81d-e566f256b0aa\") " Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.249593 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eea7d003-0909-4006-b81d-e566f256b0aa-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "eea7d003-0909-4006-b81d-e566f256b0aa" (UID: "eea7d003-0909-4006-b81d-e566f256b0aa"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.249621 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eea7d003-0909-4006-b81d-e566f256b0aa-var-lock" (OuterVolumeSpecName: "var-lock") pod "eea7d003-0909-4006-b81d-e566f256b0aa" (UID: "eea7d003-0909-4006-b81d-e566f256b0aa"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.260120 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eea7d003-0909-4006-b81d-e566f256b0aa-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "eea7d003-0909-4006-b81d-e566f256b0aa" (UID: "eea7d003-0909-4006-b81d-e566f256b0aa"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.350733 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eea7d003-0909-4006-b81d-e566f256b0aa-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.350789 4942 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eea7d003-0909-4006-b81d-e566f256b0aa-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.350801 4942 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eea7d003-0909-4006-b81d-e566f256b0aa-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.549245 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 
19:21:18.550021 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.550669 4942 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.551262 4942 status_manager.go:851] "Failed to get status for pod" podUID="eea7d003-0909-4006-b81d-e566f256b0aa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.551543 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.551586 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.551640 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.551726 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.551941 4942 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.551965 4942 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.653213 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.653323 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.654134 4942 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:18 crc kubenswrapper[4942]: E0218 19:21:18.767356 4942 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.188:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18956d8c05eebe81 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 19:21:16.602539649 +0000 UTC m=+236.307472314,LastTimestamp:2026-02-18 19:21:16.602539649 +0000 UTC m=+236.307472314,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.980695 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.981564 4942 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383" 
exitCode=0 Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.981667 4942 scope.go:117] "RemoveContainer" containerID="c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.981727 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.983976 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"eea7d003-0909-4006-b81d-e566f256b0aa","Type":"ContainerDied","Data":"239dfe6552f2af1d441cc549207cdf73962930bc9e631840f8db8225c35b625e"} Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.984015 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="239dfe6552f2af1d441cc549207cdf73962930bc9e631840f8db8225c35b625e" Feb 18 19:21:18 crc kubenswrapper[4942]: I0218 19:21:18.984085 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.005226 4942 scope.go:117] "RemoveContainer" containerID="5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.006241 4942 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.007902 4942 status_manager.go:851] "Failed to get status for pod" podUID="eea7d003-0909-4006-b81d-e566f256b0aa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.008661 4942 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.009172 4942 status_manager.go:851] "Failed to get status for pod" podUID="eea7d003-0909-4006-b81d-e566f256b0aa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.025161 4942 scope.go:117] "RemoveContainer" containerID="ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d" Feb 18 19:21:19 crc 
kubenswrapper[4942]: I0218 19:21:19.039780 4942 scope.go:117] "RemoveContainer" containerID="ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.042524 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.064831 4942 scope.go:117] "RemoveContainer" containerID="beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.080603 4942 scope.go:117] "RemoveContainer" containerID="6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.105669 4942 scope.go:117] "RemoveContainer" containerID="c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3" Feb 18 19:21:19 crc kubenswrapper[4942]: E0218 19:21:19.107362 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\": container with ID starting with c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3 not found: ID does not exist" containerID="c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.107428 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3"} err="failed to get container status \"c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\": rpc error: code = NotFound desc = could not find container \"c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3\": container with ID starting with c787e65428258ae002dd2569d2e100857851a5b699d573b42e59d1be987da8b3 not found: ID does not exist" Feb 
18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.107469 4942 scope.go:117] "RemoveContainer" containerID="5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88" Feb 18 19:21:19 crc kubenswrapper[4942]: E0218 19:21:19.107851 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\": container with ID starting with 5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88 not found: ID does not exist" containerID="5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.107895 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88"} err="failed to get container status \"5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\": rpc error: code = NotFound desc = could not find container \"5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88\": container with ID starting with 5fcd5de3303bba82e4a354de9f77b9aac574912955c2e49e2e74232f4d432a88 not found: ID does not exist" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.107919 4942 scope.go:117] "RemoveContainer" containerID="ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d" Feb 18 19:21:19 crc kubenswrapper[4942]: E0218 19:21:19.109009 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\": container with ID starting with ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d not found: ID does not exist" containerID="ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.109039 4942 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d"} err="failed to get container status \"ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\": rpc error: code = NotFound desc = could not find container \"ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d\": container with ID starting with ca3d8e99733c89b17e7211c9bae268f8e75942d896d32a6e2e9fc7e613000a6d not found: ID does not exist" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.109055 4942 scope.go:117] "RemoveContainer" containerID="ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954" Feb 18 19:21:19 crc kubenswrapper[4942]: E0218 19:21:19.109454 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\": container with ID starting with ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954 not found: ID does not exist" containerID="ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.109486 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954"} err="failed to get container status \"ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\": rpc error: code = NotFound desc = could not find container \"ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954\": container with ID starting with ee5e19c2c5a503ae69e8052828713b9b399137e0fb7f3a06865d4d7f6b29c954 not found: ID does not exist" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.109501 4942 scope.go:117] "RemoveContainer" containerID="beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383" Feb 18 19:21:19 crc kubenswrapper[4942]: E0218 19:21:19.109840 4942 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\": container with ID starting with beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383 not found: ID does not exist" containerID="beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.109882 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383"} err="failed to get container status \"beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\": rpc error: code = NotFound desc = could not find container \"beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383\": container with ID starting with beecfbdf76954e7b9895240b52a2ec033ec3b81094ece02095f67a5f389d0383 not found: ID does not exist" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.109916 4942 scope.go:117] "RemoveContainer" containerID="6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09" Feb 18 19:21:19 crc kubenswrapper[4942]: E0218 19:21:19.110258 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\": container with ID starting with 6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09 not found: ID does not exist" containerID="6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09" Feb 18 19:21:19 crc kubenswrapper[4942]: I0218 19:21:19.110283 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09"} err="failed to get container status \"6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\": rpc error: code = NotFound desc = could not find container 
\"6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09\": container with ID starting with 6adb31d2464d541db843bddad1c43811ca11900db12deeb0d7c13ff8a3186d09 not found: ID does not exist" Feb 18 19:21:21 crc kubenswrapper[4942]: I0218 19:21:21.041898 4942 status_manager.go:851] "Failed to get status for pod" podUID="eea7d003-0909-4006-b81d-e566f256b0aa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:21 crc kubenswrapper[4942]: E0218 19:21:21.700433 4942 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:21 crc kubenswrapper[4942]: E0218 19:21:21.701307 4942 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:21 crc kubenswrapper[4942]: E0218 19:21:21.701691 4942 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:21 crc kubenswrapper[4942]: E0218 19:21:21.702154 4942 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:21 crc kubenswrapper[4942]: E0218 19:21:21.702534 4942 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": 
dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:21 crc kubenswrapper[4942]: I0218 19:21:21.702573 4942 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 18 19:21:21 crc kubenswrapper[4942]: E0218 19:21:21.702962 4942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="200ms" Feb 18 19:21:21 crc kubenswrapper[4942]: E0218 19:21:21.904404 4942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="400ms" Feb 18 19:21:22 crc kubenswrapper[4942]: E0218 19:21:22.305379 4942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="800ms" Feb 18 19:21:23 crc kubenswrapper[4942]: E0218 19:21:23.106496 4942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="1.6s" Feb 18 19:21:24 crc kubenswrapper[4942]: E0218 19:21:24.707539 4942 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="3.2s" Feb 18 19:21:27 crc kubenswrapper[4942]: E0218 19:21:27.908586 4942 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="6.4s" Feb 18 19:21:28 crc kubenswrapper[4942]: E0218 19:21:28.769012 4942 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.188:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18956d8c05eebe81 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 19:21:16.602539649 +0000 UTC m=+236.307472314,LastTimestamp:2026-02-18 19:21:16.602539649 +0000 UTC m=+236.307472314,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 19:21:29 crc kubenswrapper[4942]: I0218 19:21:29.035689 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:29 crc kubenswrapper[4942]: I0218 19:21:29.036624 4942 status_manager.go:851] "Failed to get status for pod" podUID="eea7d003-0909-4006-b81d-e566f256b0aa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:29 crc kubenswrapper[4942]: I0218 19:21:29.067854 4942 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4da93830-99a3-4d84-91c8-a5352a987b3f" Feb 18 19:21:29 crc kubenswrapper[4942]: I0218 19:21:29.068119 4942 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4da93830-99a3-4d84-91c8-a5352a987b3f" Feb 18 19:21:29 crc kubenswrapper[4942]: E0218 19:21:29.068716 4942 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:29 crc kubenswrapper[4942]: I0218 19:21:29.069665 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:30 crc kubenswrapper[4942]: I0218 19:21:30.066390 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 19:21:30 crc kubenswrapper[4942]: I0218 19:21:30.067811 4942 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4" exitCode=1 Feb 18 19:21:30 crc kubenswrapper[4942]: I0218 19:21:30.067878 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4"} Feb 18 19:21:30 crc kubenswrapper[4942]: I0218 19:21:30.068922 4942 scope.go:117] "RemoveContainer" containerID="a3654d3b4a5084ce9ffb9ef8aeab6155788b56ac636aee44b098f6e9d457a8d4" Feb 18 19:21:30 crc kubenswrapper[4942]: I0218 19:21:30.069752 4942 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:30 crc kubenswrapper[4942]: I0218 19:21:30.070373 4942 status_manager.go:851] "Failed to get status for pod" podUID="eea7d003-0909-4006-b81d-e566f256b0aa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:30 crc kubenswrapper[4942]: I0218 19:21:30.074321 4942 generic.go:334] "Generic (PLEG): 
container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="fa1703555cb3fb7f21a9e0d57be7e2d37dad00a8ff5e00e3a584823e82a9a71d" exitCode=0 Feb 18 19:21:30 crc kubenswrapper[4942]: I0218 19:21:30.074399 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"fa1703555cb3fb7f21a9e0d57be7e2d37dad00a8ff5e00e3a584823e82a9a71d"} Feb 18 19:21:30 crc kubenswrapper[4942]: I0218 19:21:30.074957 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b0c3eddb68e1ad1a8b897f6bd0279d49852d50f8b285c09b376df99296f4d9ba"} Feb 18 19:21:30 crc kubenswrapper[4942]: I0218 19:21:30.075844 4942 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4da93830-99a3-4d84-91c8-a5352a987b3f" Feb 18 19:21:30 crc kubenswrapper[4942]: I0218 19:21:30.075924 4942 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4da93830-99a3-4d84-91c8-a5352a987b3f" Feb 18 19:21:30 crc kubenswrapper[4942]: I0218 19:21:30.076365 4942 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:30 crc kubenswrapper[4942]: E0218 19:21:30.076801 4942 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:30 crc 
kubenswrapper[4942]: I0218 19:21:30.076997 4942 status_manager.go:851] "Failed to get status for pod" podUID="eea7d003-0909-4006-b81d-e566f256b0aa" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.188:6443: connect: connection refused" Feb 18 19:21:31 crc kubenswrapper[4942]: I0218 19:21:31.100149 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 19:21:31 crc kubenswrapper[4942]: I0218 19:21:31.100271 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5036f7403b191694066ef320028a2bf55bd13b329a0f1d42f1a10a59b7bac1be"} Feb 18 19:21:31 crc kubenswrapper[4942]: I0218 19:21:31.105941 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"253d75b2ceadf269e8eebde80a5116a461b9c35dff7feb969427e61d705af5ee"} Feb 18 19:21:31 crc kubenswrapper[4942]: I0218 19:21:31.105979 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"35e1e6e82d1c193dc851eefeee988519e4dad0c8f4a376471c152d12d878218a"} Feb 18 19:21:31 crc kubenswrapper[4942]: I0218 19:21:31.105989 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"eaa409a84e6532023beb10a5eba80842788739fb818b83a09abcf1c01f9f8972"} Feb 18 19:21:32 crc kubenswrapper[4942]: I0218 19:21:32.115630 4942 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e66ecb43a8be01a3e34f28aec8f5fa7100a9b6199018b4b430a8b5563769ffeb"} Feb 18 19:21:32 crc kubenswrapper[4942]: I0218 19:21:32.116024 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"54c0a911eba9a8d427b880b3104dc6abd434c843d2d34b687f30414cbd53f687"} Feb 18 19:21:32 crc kubenswrapper[4942]: I0218 19:21:32.116080 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:32 crc kubenswrapper[4942]: I0218 19:21:32.116121 4942 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4da93830-99a3-4d84-91c8-a5352a987b3f" Feb 18 19:21:32 crc kubenswrapper[4942]: I0218 19:21:32.116148 4942 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4da93830-99a3-4d84-91c8-a5352a987b3f" Feb 18 19:21:34 crc kubenswrapper[4942]: I0218 19:21:34.070218 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:34 crc kubenswrapper[4942]: I0218 19:21:34.070820 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:34 crc kubenswrapper[4942]: I0218 19:21:34.079973 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:34 crc kubenswrapper[4942]: I0218 19:21:34.714567 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:21:34 crc kubenswrapper[4942]: I0218 19:21:34.720790 4942 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:21:35 crc kubenswrapper[4942]: I0218 19:21:35.131621 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:21:37 crc kubenswrapper[4942]: I0218 19:21:37.131123 4942 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:37 crc kubenswrapper[4942]: I0218 19:21:37.191372 4942 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="50238630-c949-4324-b183-d0cf16046628" Feb 18 19:21:38 crc kubenswrapper[4942]: I0218 19:21:38.153596 4942 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4da93830-99a3-4d84-91c8-a5352a987b3f" Feb 18 19:21:38 crc kubenswrapper[4942]: I0218 19:21:38.153629 4942 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4da93830-99a3-4d84-91c8-a5352a987b3f" Feb 18 19:21:38 crc kubenswrapper[4942]: I0218 19:21:38.158873 4942 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="50238630-c949-4324-b183-d0cf16046628" Feb 18 19:21:46 crc kubenswrapper[4942]: I0218 19:21:46.677444 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 19:21:46 crc kubenswrapper[4942]: I0218 19:21:46.791262 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 19:21:47 crc kubenswrapper[4942]: I0218 19:21:47.070470 4942 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 18 19:21:47 crc kubenswrapper[4942]: I0218 19:21:47.465201 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 18 19:21:47 crc kubenswrapper[4942]: I0218 19:21:47.634850 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 18 19:21:47 crc kubenswrapper[4942]: I0218 19:21:47.853869 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 19:21:48 crc kubenswrapper[4942]: I0218 19:21:48.215437 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 19:21:48 crc kubenswrapper[4942]: I0218 19:21:48.345992 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 19:21:48 crc kubenswrapper[4942]: I0218 19:21:48.445196 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 18 19:21:48 crc kubenswrapper[4942]: I0218 19:21:48.526855 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 18 19:21:48 crc kubenswrapper[4942]: I0218 19:21:48.740926 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 18 19:21:48 crc kubenswrapper[4942]: I0218 19:21:48.814994 4942 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 19:21:49 crc kubenswrapper[4942]: I0218 19:21:49.210496 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 
18 19:21:49 crc kubenswrapper[4942]: I0218 19:21:49.460083 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 18 19:21:49 crc kubenswrapper[4942]: I0218 19:21:49.616311 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 18 19:21:49 crc kubenswrapper[4942]: I0218 19:21:49.740970 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 18 19:21:49 crc kubenswrapper[4942]: I0218 19:21:49.761251 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 18 19:21:49 crc kubenswrapper[4942]: I0218 19:21:49.777017 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 18 19:21:49 crc kubenswrapper[4942]: I0218 19:21:49.874938 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 19:21:49 crc kubenswrapper[4942]: I0218 19:21:49.961532 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 18 19:21:49 crc kubenswrapper[4942]: I0218 19:21:49.991265 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.126160 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.150928 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.198094 4942 reflector.go:368] Caches populated for *v1.Secret from 
object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.253875 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.417922 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.489551 4942 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.497478 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.497568 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.497598 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jfkrb","openshift-marketplace/redhat-operators-5dn7d","openshift-marketplace/certified-operators-tm22r","openshift-marketplace/community-operators-gjnbk","openshift-marketplace/redhat-marketplace-vrlpg"] Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.498015 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vrlpg" podUID="07639322-4f8b-47d5-85c7-da678ca9eaf1" containerName="registry-server" containerID="cri-o://0485d8ae4ec42faf8a5c04d463333b6555724522c62202a6a4e3aa41dc6c9e87" gracePeriod=30 Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.498600 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gjnbk" podUID="a7f05662-6e61-4d86-8a52-13000d4bd2be" containerName="registry-server" 
containerID="cri-o://c4aefe50bfa9203f0d0c77dad76433e93286db36b8b60f68b3ee0a8c684c0a58" gracePeriod=30 Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.498838 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5dn7d" podUID="fc54a822-e044-4d85-a0a8-499a79d09aaf" containerName="registry-server" containerID="cri-o://5b076eb0931e413c70c108596f5ee9f710dd64a76e5895d3b7dca278f88f019c" gracePeriod=30 Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.499250 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tm22r" podUID="9b0511d8-736f-48fa-94a5-9a45e8482467" containerName="registry-server" containerID="cri-o://83405b8b823dd9443c9b689919187f4e07d4402df4f5ed4f940ea091c7001e2b" gracePeriod=30 Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.500403 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" podUID="efab374b-fec3-4b4e-81f1-002715812a67" containerName="marketplace-operator" containerID="cri-o://1be9f6409e8403c5211f5628cc4c7f37ce2a207d76287a814454050db0e28241" gracePeriod=30 Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.502974 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.561033 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.56101556 podStartE2EDuration="13.56101556s" podCreationTimestamp="2026-02-18 19:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:21:50.557537691 +0000 UTC m=+270.262470406" watchObservedRunningTime="2026-02-18 19:21:50.56101556 +0000 UTC m=+270.265948235" Feb 18 19:21:50 crc 
kubenswrapper[4942]: I0218 19:21:50.639548 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.856824 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.875298 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.905064 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5dn7d" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.967344 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc54a822-e044-4d85-a0a8-499a79d09aaf-catalog-content\") pod \"fc54a822-e044-4d85-a0a8-499a79d09aaf\" (UID: \"fc54a822-e044-4d85-a0a8-499a79d09aaf\") " Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.967403 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc54a822-e044-4d85-a0a8-499a79d09aaf-utilities\") pod \"fc54a822-e044-4d85-a0a8-499a79d09aaf\" (UID: \"fc54a822-e044-4d85-a0a8-499a79d09aaf\") " Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.967436 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjnb8\" (UniqueName: \"kubernetes.io/projected/fc54a822-e044-4d85-a0a8-499a79d09aaf-kube-api-access-bjnb8\") pod \"fc54a822-e044-4d85-a0a8-499a79d09aaf\" (UID: \"fc54a822-e044-4d85-a0a8-499a79d09aaf\") " Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.968326 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tm22r" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.968691 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc54a822-e044-4d85-a0a8-499a79d09aaf-utilities" (OuterVolumeSpecName: "utilities") pod "fc54a822-e044-4d85-a0a8-499a79d09aaf" (UID: "fc54a822-e044-4d85-a0a8-499a79d09aaf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.975023 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrlpg" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.978408 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjnbk" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.978657 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc54a822-e044-4d85-a0a8-499a79d09aaf-kube-api-access-bjnb8" (OuterVolumeSpecName: "kube-api-access-bjnb8") pod "fc54a822-e044-4d85-a0a8-499a79d09aaf" (UID: "fc54a822-e044-4d85-a0a8-499a79d09aaf"). InnerVolumeSpecName "kube-api-access-bjnb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:21:50 crc kubenswrapper[4942]: I0218 19:21:50.983030 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.046014 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.068705 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b0511d8-736f-48fa-94a5-9a45e8482467-catalog-content\") pod \"9b0511d8-736f-48fa-94a5-9a45e8482467\" (UID: \"9b0511d8-736f-48fa-94a5-9a45e8482467\") " Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.068746 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/efab374b-fec3-4b4e-81f1-002715812a67-marketplace-operator-metrics\") pod \"efab374b-fec3-4b4e-81f1-002715812a67\" (UID: \"efab374b-fec3-4b4e-81f1-002715812a67\") " Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.068804 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f05662-6e61-4d86-8a52-13000d4bd2be-utilities\") pod \"a7f05662-6e61-4d86-8a52-13000d4bd2be\" (UID: \"a7f05662-6e61-4d86-8a52-13000d4bd2be\") " Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.068820 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw4w8\" (UniqueName: \"kubernetes.io/projected/9b0511d8-736f-48fa-94a5-9a45e8482467-kube-api-access-lw4w8\") pod \"9b0511d8-736f-48fa-94a5-9a45e8482467\" (UID: \"9b0511d8-736f-48fa-94a5-9a45e8482467\") " Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.068836 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89296\" (UniqueName: \"kubernetes.io/projected/a7f05662-6e61-4d86-8a52-13000d4bd2be-kube-api-access-89296\") pod 
\"a7f05662-6e61-4d86-8a52-13000d4bd2be\" (UID: \"a7f05662-6e61-4d86-8a52-13000d4bd2be\") " Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.068979 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc54a822-e044-4d85-a0a8-499a79d09aaf-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.068989 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjnb8\" (UniqueName: \"kubernetes.io/projected/fc54a822-e044-4d85-a0a8-499a79d09aaf-kube-api-access-bjnb8\") on node \"crc\" DevicePath \"\"" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.070285 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7f05662-6e61-4d86-8a52-13000d4bd2be-utilities" (OuterVolumeSpecName: "utilities") pod "a7f05662-6e61-4d86-8a52-13000d4bd2be" (UID: "a7f05662-6e61-4d86-8a52-13000d4bd2be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.071493 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7f05662-6e61-4d86-8a52-13000d4bd2be-kube-api-access-89296" (OuterVolumeSpecName: "kube-api-access-89296") pod "a7f05662-6e61-4d86-8a52-13000d4bd2be" (UID: "a7f05662-6e61-4d86-8a52-13000d4bd2be"). InnerVolumeSpecName "kube-api-access-89296". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.072693 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b0511d8-736f-48fa-94a5-9a45e8482467-kube-api-access-lw4w8" (OuterVolumeSpecName: "kube-api-access-lw4w8") pod "9b0511d8-736f-48fa-94a5-9a45e8482467" (UID: "9b0511d8-736f-48fa-94a5-9a45e8482467"). InnerVolumeSpecName "kube-api-access-lw4w8". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.074128 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efab374b-fec3-4b4e-81f1-002715812a67-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "efab374b-fec3-4b4e-81f1-002715812a67" (UID: "efab374b-fec3-4b4e-81f1-002715812a67"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.108128 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc54a822-e044-4d85-a0a8-499a79d09aaf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc54a822-e044-4d85-a0a8-499a79d09aaf" (UID: "fc54a822-e044-4d85-a0a8-499a79d09aaf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.141728 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b0511d8-736f-48fa-94a5-9a45e8482467-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b0511d8-736f-48fa-94a5-9a45e8482467" (UID: "9b0511d8-736f-48fa-94a5-9a45e8482467"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.169386 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/efab374b-fec3-4b4e-81f1-002715812a67-marketplace-trusted-ca\") pod \"efab374b-fec3-4b4e-81f1-002715812a67\" (UID: \"efab374b-fec3-4b4e-81f1-002715812a67\") "
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.169440 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07639322-4f8b-47d5-85c7-da678ca9eaf1-utilities\") pod \"07639322-4f8b-47d5-85c7-da678ca9eaf1\" (UID: \"07639322-4f8b-47d5-85c7-da678ca9eaf1\") "
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.169544 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phjwr\" (UniqueName: \"kubernetes.io/projected/efab374b-fec3-4b4e-81f1-002715812a67-kube-api-access-phjwr\") pod \"efab374b-fec3-4b4e-81f1-002715812a67\" (UID: \"efab374b-fec3-4b4e-81f1-002715812a67\") "
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.169568 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07639322-4f8b-47d5-85c7-da678ca9eaf1-catalog-content\") pod \"07639322-4f8b-47d5-85c7-da678ca9eaf1\" (UID: \"07639322-4f8b-47d5-85c7-da678ca9eaf1\") "
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.169613 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8vnr\" (UniqueName: \"kubernetes.io/projected/07639322-4f8b-47d5-85c7-da678ca9eaf1-kube-api-access-h8vnr\") pod \"07639322-4f8b-47d5-85c7-da678ca9eaf1\" (UID: \"07639322-4f8b-47d5-85c7-da678ca9eaf1\") "
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.169641 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f05662-6e61-4d86-8a52-13000d4bd2be-catalog-content\") pod \"a7f05662-6e61-4d86-8a52-13000d4bd2be\" (UID: \"a7f05662-6e61-4d86-8a52-13000d4bd2be\") "
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.169671 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b0511d8-736f-48fa-94a5-9a45e8482467-utilities\") pod \"9b0511d8-736f-48fa-94a5-9a45e8482467\" (UID: \"9b0511d8-736f-48fa-94a5-9a45e8482467\") "
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.170323 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efab374b-fec3-4b4e-81f1-002715812a67-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "efab374b-fec3-4b4e-81f1-002715812a67" (UID: "efab374b-fec3-4b4e-81f1-002715812a67"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.170592 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b0511d8-736f-48fa-94a5-9a45e8482467-utilities" (OuterVolumeSpecName: "utilities") pod "9b0511d8-736f-48fa-94a5-9a45e8482467" (UID: "9b0511d8-736f-48fa-94a5-9a45e8482467"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.170819 4942 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/efab374b-fec3-4b4e-81f1-002715812a67-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.170840 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b0511d8-736f-48fa-94a5-9a45e8482467-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.170855 4942 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/efab374b-fec3-4b4e-81f1-002715812a67-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.170913 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc54a822-e044-4d85-a0a8-499a79d09aaf-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.170928 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f05662-6e61-4d86-8a52-13000d4bd2be-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.170940 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lw4w8\" (UniqueName: \"kubernetes.io/projected/9b0511d8-736f-48fa-94a5-9a45e8482467-kube-api-access-lw4w8\") on node \"crc\" DevicePath \"\""
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.170953 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89296\" (UniqueName: \"kubernetes.io/projected/a7f05662-6e61-4d86-8a52-13000d4bd2be-kube-api-access-89296\") on node \"crc\" DevicePath \"\""
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.170964 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b0511d8-736f-48fa-94a5-9a45e8482467-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.171101 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07639322-4f8b-47d5-85c7-da678ca9eaf1-utilities" (OuterVolumeSpecName: "utilities") pod "07639322-4f8b-47d5-85c7-da678ca9eaf1" (UID: "07639322-4f8b-47d5-85c7-da678ca9eaf1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.173903 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efab374b-fec3-4b4e-81f1-002715812a67-kube-api-access-phjwr" (OuterVolumeSpecName: "kube-api-access-phjwr") pod "efab374b-fec3-4b4e-81f1-002715812a67" (UID: "efab374b-fec3-4b4e-81f1-002715812a67"). InnerVolumeSpecName "kube-api-access-phjwr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.174318 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07639322-4f8b-47d5-85c7-da678ca9eaf1-kube-api-access-h8vnr" (OuterVolumeSpecName: "kube-api-access-h8vnr") pod "07639322-4f8b-47d5-85c7-da678ca9eaf1" (UID: "07639322-4f8b-47d5-85c7-da678ca9eaf1"). InnerVolumeSpecName "kube-api-access-h8vnr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.208523 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07639322-4f8b-47d5-85c7-da678ca9eaf1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07639322-4f8b-47d5-85c7-da678ca9eaf1" (UID: "07639322-4f8b-47d5-85c7-da678ca9eaf1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.217224 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.228472 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.239058 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.242734 4942 generic.go:334] "Generic (PLEG): container finished" podID="a7f05662-6e61-4d86-8a52-13000d4bd2be" containerID="c4aefe50bfa9203f0d0c77dad76433e93286db36b8b60f68b3ee0a8c684c0a58" exitCode=0
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.242847 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjnbk" event={"ID":"a7f05662-6e61-4d86-8a52-13000d4bd2be","Type":"ContainerDied","Data":"c4aefe50bfa9203f0d0c77dad76433e93286db36b8b60f68b3ee0a8c684c0a58"}
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.242869 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjnbk"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.242880 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjnbk" event={"ID":"a7f05662-6e61-4d86-8a52-13000d4bd2be","Type":"ContainerDied","Data":"48be7d221e592c508e0024c55b4c7ad66329680b58e7532a74bd5a930a0ac4bd"}
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.242899 4942 scope.go:117] "RemoveContainer" containerID="c4aefe50bfa9203f0d0c77dad76433e93286db36b8b60f68b3ee0a8c684c0a58"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.246241 4942 generic.go:334] "Generic (PLEG): container finished" podID="07639322-4f8b-47d5-85c7-da678ca9eaf1" containerID="0485d8ae4ec42faf8a5c04d463333b6555724522c62202a6a4e3aa41dc6c9e87" exitCode=0
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.246289 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrlpg" event={"ID":"07639322-4f8b-47d5-85c7-da678ca9eaf1","Type":"ContainerDied","Data":"0485d8ae4ec42faf8a5c04d463333b6555724522c62202a6a4e3aa41dc6c9e87"}
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.246325 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrlpg" event={"ID":"07639322-4f8b-47d5-85c7-da678ca9eaf1","Type":"ContainerDied","Data":"1342033222b8b7017fedfcc1a993530ce3bb6c2c950b03c672270884763e7952"}
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.246865 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrlpg"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.254342 4942 generic.go:334] "Generic (PLEG): container finished" podID="efab374b-fec3-4b4e-81f1-002715812a67" containerID="1be9f6409e8403c5211f5628cc4c7f37ce2a207d76287a814454050db0e28241" exitCode=0
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.254393 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" event={"ID":"efab374b-fec3-4b4e-81f1-002715812a67","Type":"ContainerDied","Data":"1be9f6409e8403c5211f5628cc4c7f37ce2a207d76287a814454050db0e28241"}
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.254430 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb" event={"ID":"efab374b-fec3-4b4e-81f1-002715812a67","Type":"ContainerDied","Data":"3c276811f364fb83706109331be8399abc2c7a535cfd237e4abe3dc07119fee5"}
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.254377 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jfkrb"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.259826 4942 generic.go:334] "Generic (PLEG): container finished" podID="9b0511d8-736f-48fa-94a5-9a45e8482467" containerID="83405b8b823dd9443c9b689919187f4e07d4402df4f5ed4f940ea091c7001e2b" exitCode=0
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.259881 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7f05662-6e61-4d86-8a52-13000d4bd2be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7f05662-6e61-4d86-8a52-13000d4bd2be" (UID: "a7f05662-6e61-4d86-8a52-13000d4bd2be"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.259946 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tm22r"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.260206 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tm22r" event={"ID":"9b0511d8-736f-48fa-94a5-9a45e8482467","Type":"ContainerDied","Data":"83405b8b823dd9443c9b689919187f4e07d4402df4f5ed4f940ea091c7001e2b"}
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.260248 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tm22r" event={"ID":"9b0511d8-736f-48fa-94a5-9a45e8482467","Type":"ContainerDied","Data":"903844334b076d9d3fb48a98e733d182c6c0ea5de7f8aeb1362b7e203a4a8fa4"}
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.264454 4942 generic.go:334] "Generic (PLEG): container finished" podID="fc54a822-e044-4d85-a0a8-499a79d09aaf" containerID="5b076eb0931e413c70c108596f5ee9f710dd64a76e5895d3b7dca278f88f019c" exitCode=0
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.264493 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dn7d" event={"ID":"fc54a822-e044-4d85-a0a8-499a79d09aaf","Type":"ContainerDied","Data":"5b076eb0931e413c70c108596f5ee9f710dd64a76e5895d3b7dca278f88f019c"}
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.264512 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5dn7d" event={"ID":"fc54a822-e044-4d85-a0a8-499a79d09aaf","Type":"ContainerDied","Data":"56923a9d84e1c384a4de3a0f2cac66f27ae78aee76d844588bcd57af55695ead"}
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.264568 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5dn7d"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.271748 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07639322-4f8b-47d5-85c7-da678ca9eaf1-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.271826 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8vnr\" (UniqueName: \"kubernetes.io/projected/07639322-4f8b-47d5-85c7-da678ca9eaf1-kube-api-access-h8vnr\") on node \"crc\" DevicePath \"\""
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.271849 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f05662-6e61-4d86-8a52-13000d4bd2be-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.271864 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07639322-4f8b-47d5-85c7-da678ca9eaf1-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.271880 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phjwr\" (UniqueName: \"kubernetes.io/projected/efab374b-fec3-4b4e-81f1-002715812a67-kube-api-access-phjwr\") on node \"crc\" DevicePath \"\""
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.309328 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrlpg"]
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.311783 4942 scope.go:117] "RemoveContainer" containerID="c444eba8355a9abafffa8c61716275eed3f491c2d65fff1fbecb6c7394ac87ff"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.311841 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.312574 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrlpg"]
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.328724 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tm22r"]
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.332047 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.337923 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tm22r"]
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.346850 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5dn7d"]
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.354292 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5dn7d"]
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.354562 4942 scope.go:117] "RemoveContainer" containerID="e891a3b4b4ba4720cda01043300773838c67303d7afcead4f05b8f2e095463e4"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.360006 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jfkrb"]
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.365602 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jfkrb"]
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.375562 4942 scope.go:117] "RemoveContainer" containerID="c4aefe50bfa9203f0d0c77dad76433e93286db36b8b60f68b3ee0a8c684c0a58"
Feb 18 19:21:51 crc kubenswrapper[4942]: E0218 19:21:51.376538 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4aefe50bfa9203f0d0c77dad76433e93286db36b8b60f68b3ee0a8c684c0a58\": container with ID starting with c4aefe50bfa9203f0d0c77dad76433e93286db36b8b60f68b3ee0a8c684c0a58 not found: ID does not exist" containerID="c4aefe50bfa9203f0d0c77dad76433e93286db36b8b60f68b3ee0a8c684c0a58"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.376584 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4aefe50bfa9203f0d0c77dad76433e93286db36b8b60f68b3ee0a8c684c0a58"} err="failed to get container status \"c4aefe50bfa9203f0d0c77dad76433e93286db36b8b60f68b3ee0a8c684c0a58\": rpc error: code = NotFound desc = could not find container \"c4aefe50bfa9203f0d0c77dad76433e93286db36b8b60f68b3ee0a8c684c0a58\": container with ID starting with c4aefe50bfa9203f0d0c77dad76433e93286db36b8b60f68b3ee0a8c684c0a58 not found: ID does not exist"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.376609 4942 scope.go:117] "RemoveContainer" containerID="c444eba8355a9abafffa8c61716275eed3f491c2d65fff1fbecb6c7394ac87ff"
Feb 18 19:21:51 crc kubenswrapper[4942]: E0218 19:21:51.377009 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c444eba8355a9abafffa8c61716275eed3f491c2d65fff1fbecb6c7394ac87ff\": container with ID starting with c444eba8355a9abafffa8c61716275eed3f491c2d65fff1fbecb6c7394ac87ff not found: ID does not exist" containerID="c444eba8355a9abafffa8c61716275eed3f491c2d65fff1fbecb6c7394ac87ff"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.377038 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c444eba8355a9abafffa8c61716275eed3f491c2d65fff1fbecb6c7394ac87ff"} err="failed to get container status \"c444eba8355a9abafffa8c61716275eed3f491c2d65fff1fbecb6c7394ac87ff\": rpc error: code = NotFound desc = could not find container \"c444eba8355a9abafffa8c61716275eed3f491c2d65fff1fbecb6c7394ac87ff\": container with ID starting with c444eba8355a9abafffa8c61716275eed3f491c2d65fff1fbecb6c7394ac87ff not found: ID does not exist"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.377056 4942 scope.go:117] "RemoveContainer" containerID="e891a3b4b4ba4720cda01043300773838c67303d7afcead4f05b8f2e095463e4"
Feb 18 19:21:51 crc kubenswrapper[4942]: E0218 19:21:51.377474 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e891a3b4b4ba4720cda01043300773838c67303d7afcead4f05b8f2e095463e4\": container with ID starting with e891a3b4b4ba4720cda01043300773838c67303d7afcead4f05b8f2e095463e4 not found: ID does not exist" containerID="e891a3b4b4ba4720cda01043300773838c67303d7afcead4f05b8f2e095463e4"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.377520 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e891a3b4b4ba4720cda01043300773838c67303d7afcead4f05b8f2e095463e4"} err="failed to get container status \"e891a3b4b4ba4720cda01043300773838c67303d7afcead4f05b8f2e095463e4\": rpc error: code = NotFound desc = could not find container \"e891a3b4b4ba4720cda01043300773838c67303d7afcead4f05b8f2e095463e4\": container with ID starting with e891a3b4b4ba4720cda01043300773838c67303d7afcead4f05b8f2e095463e4 not found: ID does not exist"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.377541 4942 scope.go:117] "RemoveContainer" containerID="0485d8ae4ec42faf8a5c04d463333b6555724522c62202a6a4e3aa41dc6c9e87"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.377664 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.392574 4942 scope.go:117] "RemoveContainer" containerID="6f1810b19a4355734dbaeee787309c5dab10550211c3759c7c9b6ebd65265c09"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.414551 4942 scope.go:117] "RemoveContainer" containerID="656a607515eaebac36b55875247d64557f81a70e0dda53e05599d7bfce8c0c9a"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.428750 4942 scope.go:117] "RemoveContainer" containerID="0485d8ae4ec42faf8a5c04d463333b6555724522c62202a6a4e3aa41dc6c9e87"
Feb 18 19:21:51 crc kubenswrapper[4942]: E0218 19:21:51.429164 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0485d8ae4ec42faf8a5c04d463333b6555724522c62202a6a4e3aa41dc6c9e87\": container with ID starting with 0485d8ae4ec42faf8a5c04d463333b6555724522c62202a6a4e3aa41dc6c9e87 not found: ID does not exist" containerID="0485d8ae4ec42faf8a5c04d463333b6555724522c62202a6a4e3aa41dc6c9e87"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.429195 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0485d8ae4ec42faf8a5c04d463333b6555724522c62202a6a4e3aa41dc6c9e87"} err="failed to get container status \"0485d8ae4ec42faf8a5c04d463333b6555724522c62202a6a4e3aa41dc6c9e87\": rpc error: code = NotFound desc = could not find container \"0485d8ae4ec42faf8a5c04d463333b6555724522c62202a6a4e3aa41dc6c9e87\": container with ID starting with 0485d8ae4ec42faf8a5c04d463333b6555724522c62202a6a4e3aa41dc6c9e87 not found: ID does not exist"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.429220 4942 scope.go:117] "RemoveContainer" containerID="6f1810b19a4355734dbaeee787309c5dab10550211c3759c7c9b6ebd65265c09"
Feb 18 19:21:51 crc kubenswrapper[4942]: E0218 19:21:51.429544 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f1810b19a4355734dbaeee787309c5dab10550211c3759c7c9b6ebd65265c09\": container with ID starting with 6f1810b19a4355734dbaeee787309c5dab10550211c3759c7c9b6ebd65265c09 not found: ID does not exist" containerID="6f1810b19a4355734dbaeee787309c5dab10550211c3759c7c9b6ebd65265c09"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.429566 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f1810b19a4355734dbaeee787309c5dab10550211c3759c7c9b6ebd65265c09"} err="failed to get container status \"6f1810b19a4355734dbaeee787309c5dab10550211c3759c7c9b6ebd65265c09\": rpc error: code = NotFound desc = could not find container \"6f1810b19a4355734dbaeee787309c5dab10550211c3759c7c9b6ebd65265c09\": container with ID starting with 6f1810b19a4355734dbaeee787309c5dab10550211c3759c7c9b6ebd65265c09 not found: ID does not exist"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.429580 4942 scope.go:117] "RemoveContainer" containerID="656a607515eaebac36b55875247d64557f81a70e0dda53e05599d7bfce8c0c9a"
Feb 18 19:21:51 crc kubenswrapper[4942]: E0218 19:21:51.429865 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"656a607515eaebac36b55875247d64557f81a70e0dda53e05599d7bfce8c0c9a\": container with ID starting with 656a607515eaebac36b55875247d64557f81a70e0dda53e05599d7bfce8c0c9a not found: ID does not exist" containerID="656a607515eaebac36b55875247d64557f81a70e0dda53e05599d7bfce8c0c9a"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.429884 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"656a607515eaebac36b55875247d64557f81a70e0dda53e05599d7bfce8c0c9a"} err="failed to get container status \"656a607515eaebac36b55875247d64557f81a70e0dda53e05599d7bfce8c0c9a\": rpc error: code = NotFound desc = could not find container \"656a607515eaebac36b55875247d64557f81a70e0dda53e05599d7bfce8c0c9a\": container with ID starting with 656a607515eaebac36b55875247d64557f81a70e0dda53e05599d7bfce8c0c9a not found: ID does not exist"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.429897 4942 scope.go:117] "RemoveContainer" containerID="1be9f6409e8403c5211f5628cc4c7f37ce2a207d76287a814454050db0e28241"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.443872 4942 scope.go:117] "RemoveContainer" containerID="1be9f6409e8403c5211f5628cc4c7f37ce2a207d76287a814454050db0e28241"
Feb 18 19:21:51 crc kubenswrapper[4942]: E0218 19:21:51.444130 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1be9f6409e8403c5211f5628cc4c7f37ce2a207d76287a814454050db0e28241\": container with ID starting with 1be9f6409e8403c5211f5628cc4c7f37ce2a207d76287a814454050db0e28241 not found: ID does not exist" containerID="1be9f6409e8403c5211f5628cc4c7f37ce2a207d76287a814454050db0e28241"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.444153 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1be9f6409e8403c5211f5628cc4c7f37ce2a207d76287a814454050db0e28241"} err="failed to get container status \"1be9f6409e8403c5211f5628cc4c7f37ce2a207d76287a814454050db0e28241\": rpc error: code = NotFound desc = could not find container \"1be9f6409e8403c5211f5628cc4c7f37ce2a207d76287a814454050db0e28241\": container with ID starting with 1be9f6409e8403c5211f5628cc4c7f37ce2a207d76287a814454050db0e28241 not found: ID does not exist"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.444170 4942 scope.go:117] "RemoveContainer" containerID="83405b8b823dd9443c9b689919187f4e07d4402df4f5ed4f940ea091c7001e2b"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.461473 4942 scope.go:117] "RemoveContainer" containerID="a4840f4a2d896cd262391705ac29acf6d59d0478aeff45cc3eafd7da73237848"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.475194 4942 scope.go:117] "RemoveContainer" containerID="75b2c06df75750c4c383a2a5c55da9c635db709e0a2c8fdf77529d081e81914f"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.488033 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.488391 4942 scope.go:117] "RemoveContainer" containerID="83405b8b823dd9443c9b689919187f4e07d4402df4f5ed4f940ea091c7001e2b"
Feb 18 19:21:51 crc kubenswrapper[4942]: E0218 19:21:51.488688 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83405b8b823dd9443c9b689919187f4e07d4402df4f5ed4f940ea091c7001e2b\": container with ID starting with 83405b8b823dd9443c9b689919187f4e07d4402df4f5ed4f940ea091c7001e2b not found: ID does not exist" containerID="83405b8b823dd9443c9b689919187f4e07d4402df4f5ed4f940ea091c7001e2b"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.488715 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83405b8b823dd9443c9b689919187f4e07d4402df4f5ed4f940ea091c7001e2b"} err="failed to get container status \"83405b8b823dd9443c9b689919187f4e07d4402df4f5ed4f940ea091c7001e2b\": rpc error: code = NotFound desc = could not find container \"83405b8b823dd9443c9b689919187f4e07d4402df4f5ed4f940ea091c7001e2b\": container with ID starting with 83405b8b823dd9443c9b689919187f4e07d4402df4f5ed4f940ea091c7001e2b not found: ID does not exist"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.488739 4942 scope.go:117] "RemoveContainer" containerID="a4840f4a2d896cd262391705ac29acf6d59d0478aeff45cc3eafd7da73237848"
Feb 18 19:21:51 crc kubenswrapper[4942]: E0218 19:21:51.489155 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4840f4a2d896cd262391705ac29acf6d59d0478aeff45cc3eafd7da73237848\": container with ID starting with a4840f4a2d896cd262391705ac29acf6d59d0478aeff45cc3eafd7da73237848 not found: ID does not exist" containerID="a4840f4a2d896cd262391705ac29acf6d59d0478aeff45cc3eafd7da73237848"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.489186 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4840f4a2d896cd262391705ac29acf6d59d0478aeff45cc3eafd7da73237848"} err="failed to get container status \"a4840f4a2d896cd262391705ac29acf6d59d0478aeff45cc3eafd7da73237848\": rpc error: code = NotFound desc = could not find container \"a4840f4a2d896cd262391705ac29acf6d59d0478aeff45cc3eafd7da73237848\": container with ID starting with a4840f4a2d896cd262391705ac29acf6d59d0478aeff45cc3eafd7da73237848 not found: ID does not exist"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.489205 4942 scope.go:117] "RemoveContainer" containerID="75b2c06df75750c4c383a2a5c55da9c635db709e0a2c8fdf77529d081e81914f"
Feb 18 19:21:51 crc kubenswrapper[4942]: E0218 19:21:51.489431 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75b2c06df75750c4c383a2a5c55da9c635db709e0a2c8fdf77529d081e81914f\": container with ID starting with 75b2c06df75750c4c383a2a5c55da9c635db709e0a2c8fdf77529d081e81914f not found: ID does not exist" containerID="75b2c06df75750c4c383a2a5c55da9c635db709e0a2c8fdf77529d081e81914f"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.489455 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75b2c06df75750c4c383a2a5c55da9c635db709e0a2c8fdf77529d081e81914f"} err="failed to get container status \"75b2c06df75750c4c383a2a5c55da9c635db709e0a2c8fdf77529d081e81914f\": rpc error: code = NotFound desc = could not find container \"75b2c06df75750c4c383a2a5c55da9c635db709e0a2c8fdf77529d081e81914f\": container with ID starting with 75b2c06df75750c4c383a2a5c55da9c635db709e0a2c8fdf77529d081e81914f not found: ID does not exist"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.489472 4942 scope.go:117] "RemoveContainer" containerID="5b076eb0931e413c70c108596f5ee9f710dd64a76e5895d3b7dca278f88f019c"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.502422 4942 scope.go:117] "RemoveContainer" containerID="f47c297aaa4179fbd75fe8d9514cdd383b6ab6c7b7fa7596996fa94fd2798c4b"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.519162 4942 scope.go:117] "RemoveContainer" containerID="45ae396e5a2bb9c54d7b56f4a32d81eba0135151fa1a2d7722d17d0a8667d980"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.521909 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.541630 4942 scope.go:117] "RemoveContainer" containerID="5b076eb0931e413c70c108596f5ee9f710dd64a76e5895d3b7dca278f88f019c"
Feb 18 19:21:51 crc kubenswrapper[4942]: E0218 19:21:51.542148 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b076eb0931e413c70c108596f5ee9f710dd64a76e5895d3b7dca278f88f019c\": container with ID starting with 5b076eb0931e413c70c108596f5ee9f710dd64a76e5895d3b7dca278f88f019c not found: ID does not exist" containerID="5b076eb0931e413c70c108596f5ee9f710dd64a76e5895d3b7dca278f88f019c"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.542179 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b076eb0931e413c70c108596f5ee9f710dd64a76e5895d3b7dca278f88f019c"} err="failed to get container status \"5b076eb0931e413c70c108596f5ee9f710dd64a76e5895d3b7dca278f88f019c\": rpc error: code = NotFound desc = could not find container \"5b076eb0931e413c70c108596f5ee9f710dd64a76e5895d3b7dca278f88f019c\": container with ID starting with 5b076eb0931e413c70c108596f5ee9f710dd64a76e5895d3b7dca278f88f019c not found: ID does not exist"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.542205 4942 scope.go:117] "RemoveContainer" containerID="f47c297aaa4179fbd75fe8d9514cdd383b6ab6c7b7fa7596996fa94fd2798c4b"
Feb 18 19:21:51 crc kubenswrapper[4942]: E0218 19:21:51.542451 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f47c297aaa4179fbd75fe8d9514cdd383b6ab6c7b7fa7596996fa94fd2798c4b\": container with ID starting with f47c297aaa4179fbd75fe8d9514cdd383b6ab6c7b7fa7596996fa94fd2798c4b not found: ID does not exist" containerID="f47c297aaa4179fbd75fe8d9514cdd383b6ab6c7b7fa7596996fa94fd2798c4b"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.542474 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f47c297aaa4179fbd75fe8d9514cdd383b6ab6c7b7fa7596996fa94fd2798c4b"} err="failed to get container status \"f47c297aaa4179fbd75fe8d9514cdd383b6ab6c7b7fa7596996fa94fd2798c4b\": rpc error: code = NotFound desc = could not find container \"f47c297aaa4179fbd75fe8d9514cdd383b6ab6c7b7fa7596996fa94fd2798c4b\": container with ID starting with f47c297aaa4179fbd75fe8d9514cdd383b6ab6c7b7fa7596996fa94fd2798c4b not found: ID does not exist"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.542492 4942 scope.go:117] "RemoveContainer" containerID="45ae396e5a2bb9c54d7b56f4a32d81eba0135151fa1a2d7722d17d0a8667d980"
Feb 18 19:21:51 crc kubenswrapper[4942]: E0218 19:21:51.542720 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45ae396e5a2bb9c54d7b56f4a32d81eba0135151fa1a2d7722d17d0a8667d980\": container with ID starting with 45ae396e5a2bb9c54d7b56f4a32d81eba0135151fa1a2d7722d17d0a8667d980 not found: ID does not exist" containerID="45ae396e5a2bb9c54d7b56f4a32d81eba0135151fa1a2d7722d17d0a8667d980"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.542743 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45ae396e5a2bb9c54d7b56f4a32d81eba0135151fa1a2d7722d17d0a8667d980"} err="failed to get container status \"45ae396e5a2bb9c54d7b56f4a32d81eba0135151fa1a2d7722d17d0a8667d980\": rpc error: code = NotFound desc = could not find container \"45ae396e5a2bb9c54d7b56f4a32d81eba0135151fa1a2d7722d17d0a8667d980\": container with ID starting with 45ae396e5a2bb9c54d7b56f4a32d81eba0135151fa1a2d7722d17d0a8667d980 not found: ID does not exist"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.568329 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gjnbk"]
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.571350 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gjnbk"]
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.572848 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.600696 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.765090 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.863165 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 18 19:21:51 crc kubenswrapper[4942]: I0218 19:21:51.993478 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.060828 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.064984 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.138517 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.158159 4942 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.172451 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.202268 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.233602 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.286892 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.327404 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.435374 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.486789 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.506591 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.586553 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.643436 4942 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.840891 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.841075 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.947251 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 18 19:21:52 crc kubenswrapper[4942]: I0218 19:21:52.949004 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.018930 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.028159 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.042533 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07639322-4f8b-47d5-85c7-da678ca9eaf1" path="/var/lib/kubelet/pods/07639322-4f8b-47d5-85c7-da678ca9eaf1/volumes" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.043507 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b0511d8-736f-48fa-94a5-9a45e8482467" path="/var/lib/kubelet/pods/9b0511d8-736f-48fa-94a5-9a45e8482467/volumes" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.044205 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7f05662-6e61-4d86-8a52-13000d4bd2be" path="/var/lib/kubelet/pods/a7f05662-6e61-4d86-8a52-13000d4bd2be/volumes" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.045471 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="efab374b-fec3-4b4e-81f1-002715812a67" path="/var/lib/kubelet/pods/efab374b-fec3-4b4e-81f1-002715812a67/volumes" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.046014 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc54a822-e044-4d85-a0a8-499a79d09aaf" path="/var/lib/kubelet/pods/fc54a822-e044-4d85-a0a8-499a79d09aaf/volumes" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.137488 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.217223 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.228424 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.263135 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.267373 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.440330 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.475395 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.567967 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.627252 4942 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.648737 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.679932 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.794167 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.805595 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.875543 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.881583 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.882639 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.895996 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.936291 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 19:21:53 crc kubenswrapper[4942]: I0218 19:21:53.998887 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 
19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.031847 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.077485 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.082670 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.153582 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.154664 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.340400 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.427642 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.432614 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.537966 4942 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.554140 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.612113 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 18 19:21:54 crc kubenswrapper[4942]: 
I0218 19:21:54.620915 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.621868 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.735118 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.747140 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.819507 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.845393 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.877431 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 18 19:21:54 crc kubenswrapper[4942]: I0218 19:21:54.927325 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.026397 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.055967 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.056385 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 19:21:55 crc 
kubenswrapper[4942]: I0218 19:21:55.140690 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.144904 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.149256 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.188752 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.198272 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.230210 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.357368 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.421034 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.511346 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.511653 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.634905 4942 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.638579 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.655936 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.683717 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.829753 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.853937 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.930343 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.957086 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.968340 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 19:21:55 crc kubenswrapper[4942]: I0218 19:21:55.993010 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.026076 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.097815 4942 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.106435 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.120714 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.125039 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.198395 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.255417 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.285649 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.331035 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.405344 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.431151 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.439384 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 19:21:56 crc 
kubenswrapper[4942]: I0218 19:21:56.510201 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.560538 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.603977 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.760306 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.808263 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 19:21:56 crc kubenswrapper[4942]: I0218 19:21:56.855919 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.001780 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.197845 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.202134 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.265975 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.339156 4942 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.566969 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.569736 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.591810 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.707883 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.760548 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.762722 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.804321 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.814210 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.839554 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7r96m"] Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.839854 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc54a822-e044-4d85-a0a8-499a79d09aaf" containerName="registry-server" Feb 18 19:21:57 
crc kubenswrapper[4942]: I0218 19:21:57.839885 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc54a822-e044-4d85-a0a8-499a79d09aaf" containerName="registry-server" Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.839907 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0511d8-736f-48fa-94a5-9a45e8482467" containerName="registry-server" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.839920 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0511d8-736f-48fa-94a5-9a45e8482467" containerName="registry-server" Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.839937 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f05662-6e61-4d86-8a52-13000d4bd2be" containerName="extract-utilities" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.839949 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f05662-6e61-4d86-8a52-13000d4bd2be" containerName="extract-utilities" Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.839972 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f05662-6e61-4d86-8a52-13000d4bd2be" containerName="registry-server" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.839984 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f05662-6e61-4d86-8a52-13000d4bd2be" containerName="registry-server" Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.840001 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f05662-6e61-4d86-8a52-13000d4bd2be" containerName="extract-content" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.840013 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f05662-6e61-4d86-8a52-13000d4bd2be" containerName="extract-content" Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.840027 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea7d003-0909-4006-b81d-e566f256b0aa" containerName="installer" Feb 18 19:21:57 crc 
kubenswrapper[4942]: I0218 19:21:57.840039 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea7d003-0909-4006-b81d-e566f256b0aa" containerName="installer" Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.840055 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc54a822-e044-4d85-a0a8-499a79d09aaf" containerName="extract-utilities" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.840067 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc54a822-e044-4d85-a0a8-499a79d09aaf" containerName="extract-utilities" Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.840081 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07639322-4f8b-47d5-85c7-da678ca9eaf1" containerName="registry-server" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.840093 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="07639322-4f8b-47d5-85c7-da678ca9eaf1" containerName="registry-server" Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.840113 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc54a822-e044-4d85-a0a8-499a79d09aaf" containerName="extract-content" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.840125 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc54a822-e044-4d85-a0a8-499a79d09aaf" containerName="extract-content" Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.840139 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07639322-4f8b-47d5-85c7-da678ca9eaf1" containerName="extract-utilities" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.840151 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="07639322-4f8b-47d5-85c7-da678ca9eaf1" containerName="extract-utilities" Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.840172 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0511d8-736f-48fa-94a5-9a45e8482467" containerName="extract-content" Feb 18 19:21:57 crc 
kubenswrapper[4942]: I0218 19:21:57.840186 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0511d8-736f-48fa-94a5-9a45e8482467" containerName="extract-content" Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.840199 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07639322-4f8b-47d5-85c7-da678ca9eaf1" containerName="extract-content" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.840210 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="07639322-4f8b-47d5-85c7-da678ca9eaf1" containerName="extract-content" Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.840226 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efab374b-fec3-4b4e-81f1-002715812a67" containerName="marketplace-operator" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.840238 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="efab374b-fec3-4b4e-81f1-002715812a67" containerName="marketplace-operator" Feb 18 19:21:57 crc kubenswrapper[4942]: E0218 19:21:57.840361 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0511d8-736f-48fa-94a5-9a45e8482467" containerName="extract-utilities" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.840376 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0511d8-736f-48fa-94a5-9a45e8482467" containerName="extract-utilities" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.840637 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea7d003-0909-4006-b81d-e566f256b0aa" containerName="installer" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.840662 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc54a822-e044-4d85-a0a8-499a79d09aaf" containerName="registry-server" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.840722 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="07639322-4f8b-47d5-85c7-da678ca9eaf1" containerName="registry-server" Feb 18 19:21:57 crc 
kubenswrapper[4942]: I0218 19:21:57.840740 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="efab374b-fec3-4b4e-81f1-002715812a67" containerName="marketplace-operator" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.840808 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b0511d8-736f-48fa-94a5-9a45e8482467" containerName="registry-server" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.840833 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7f05662-6e61-4d86-8a52-13000d4bd2be" containerName="registry-server" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.842101 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.846845 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7r96m"] Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.847742 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.848062 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.848393 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.848667 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.853651 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.859752 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrd55\" (UniqueName: \"kubernetes.io/projected/cdeaab03-0cb4-484c-be64-2a535c7ab318-kube-api-access-vrd55\") pod \"marketplace-operator-79b997595-7r96m\" (UID: \"cdeaab03-0cb4-484c-be64-2a535c7ab318\") " pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.859839 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdeaab03-0cb4-484c-be64-2a535c7ab318-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7r96m\" (UID: \"cdeaab03-0cb4-484c-be64-2a535c7ab318\") " pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.859875 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdeaab03-0cb4-484c-be64-2a535c7ab318-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7r96m\" (UID: \"cdeaab03-0cb4-484c-be64-2a535c7ab318\") " pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.913414 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.960660 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrd55\" (UniqueName: \"kubernetes.io/projected/cdeaab03-0cb4-484c-be64-2a535c7ab318-kube-api-access-vrd55\") pod \"marketplace-operator-79b997595-7r96m\" (UID: \"cdeaab03-0cb4-484c-be64-2a535c7ab318\") " pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.960705 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdeaab03-0cb4-484c-be64-2a535c7ab318-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7r96m\" (UID: \"cdeaab03-0cb4-484c-be64-2a535c7ab318\") " pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.960732 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdeaab03-0cb4-484c-be64-2a535c7ab318-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7r96m\" (UID: \"cdeaab03-0cb4-484c-be64-2a535c7ab318\") " pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.961887 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cdeaab03-0cb4-484c-be64-2a535c7ab318-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7r96m\" (UID: \"cdeaab03-0cb4-484c-be64-2a535c7ab318\") " pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.973462 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cdeaab03-0cb4-484c-be64-2a535c7ab318-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7r96m\" (UID: \"cdeaab03-0cb4-484c-be64-2a535c7ab318\") " pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" Feb 18 19:21:57 crc kubenswrapper[4942]: I0218 19:21:57.975851 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrd55\" (UniqueName: \"kubernetes.io/projected/cdeaab03-0cb4-484c-be64-2a535c7ab318-kube-api-access-vrd55\") pod \"marketplace-operator-79b997595-7r96m\" (UID: \"cdeaab03-0cb4-484c-be64-2a535c7ab318\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.003912 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.074324 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.153630 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.164349 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.202025 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.212512 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.256816 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.292782 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.301600 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.303887 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.356238 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.398932 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.399635 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.415276 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7r96m"] Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.442667 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.456108 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.476538 4942 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.500227 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.526526 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.671754 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.693488 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.701024 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.806451 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.811236 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 18 19:21:58 crc kubenswrapper[4942]: I0218 19:21:58.922846 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.041901 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.061706 4942 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.084476 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.092800 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.133793 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.186747 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.215006 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.270953 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.279704 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.318135 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7r96m_cdeaab03-0cb4-484c-be64-2a535c7ab318/marketplace-operator/0.log" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.318357 4942 generic.go:334] "Generic (PLEG): container finished" podID="cdeaab03-0cb4-484c-be64-2a535c7ab318" containerID="63d25efdcea7d65d362ccb8f142ebfdb9fcd3359c017c79f942e1e8d9cb6e32c" exitCode=1 Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.318415 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" event={"ID":"cdeaab03-0cb4-484c-be64-2a535c7ab318","Type":"ContainerDied","Data":"63d25efdcea7d65d362ccb8f142ebfdb9fcd3359c017c79f942e1e8d9cb6e32c"} Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.318468 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" event={"ID":"cdeaab03-0cb4-484c-be64-2a535c7ab318","Type":"ContainerStarted","Data":"5f98921b4033d3ff95b638a7df72e43ca5834c8570bc040d8779d0fc53704c3d"} Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.318786 4942 scope.go:117] "RemoveContainer" containerID="63d25efdcea7d65d362ccb8f142ebfdb9fcd3359c017c79f942e1e8d9cb6e32c" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.378482 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.392277 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.403862 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.432227 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.473503 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.476641 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.485832 4942 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.521423 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.624144 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.792147 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.799631 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.825697 4942 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.826136 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://17d52aa652e2262a448752f8eeedf1ade032558596806a1871b71588f0f54812" gracePeriod=5 Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.843160 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.940226 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 18 19:21:59 crc kubenswrapper[4942]: I0218 19:21:59.962037 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 18 
19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.006700 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.048998 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.060943 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.107585 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.234360 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.280221 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.291248 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.488502 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.491405 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.621270 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.673500 4942 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.746689 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.781302 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.809376 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.894311 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.926497 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.962125 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 18 19:22:00 crc kubenswrapper[4942]: I0218 19:22:00.974614 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 18 19:22:01 crc kubenswrapper[4942]: I0218 19:22:01.139114 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 19:22:01 crc kubenswrapper[4942]: I0218 19:22:01.144463 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 18 19:22:01 crc kubenswrapper[4942]: I0218 19:22:01.145833 4942 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"trusted-ca" Feb 18 19:22:01 crc kubenswrapper[4942]: I0218 19:22:01.330683 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7r96m_cdeaab03-0cb4-484c-be64-2a535c7ab318/marketplace-operator/0.log" Feb 18 19:22:01 crc kubenswrapper[4942]: I0218 19:22:01.330753 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" event={"ID":"cdeaab03-0cb4-484c-be64-2a535c7ab318","Type":"ContainerStarted","Data":"77746b1ab73e5d1eac9b86c5bc420f04ef1fbd893259fe9fa7afe46382e72ea4"} Feb 18 19:22:01 crc kubenswrapper[4942]: I0218 19:22:01.331384 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" Feb 18 19:22:01 crc kubenswrapper[4942]: I0218 19:22:01.333133 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" Feb 18 19:22:01 crc kubenswrapper[4942]: I0218 19:22:01.351374 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7r96m" podStartSLOduration=5.351356702 podStartE2EDuration="5.351356702s" podCreationTimestamp="2026-02-18 19:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:22:01.348276864 +0000 UTC m=+281.053209529" watchObservedRunningTime="2026-02-18 19:22:01.351356702 +0000 UTC m=+281.056289367" Feb 18 19:22:01 crc kubenswrapper[4942]: I0218 19:22:01.405484 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 18 19:22:01 crc kubenswrapper[4942]: I0218 19:22:01.460237 4942 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 18 19:22:01 crc kubenswrapper[4942]: I0218 19:22:01.766775 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 18 19:22:01 crc kubenswrapper[4942]: I0218 19:22:01.792393 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 18 19:22:01 crc kubenswrapper[4942]: I0218 19:22:01.964319 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 18 19:22:02 crc kubenswrapper[4942]: I0218 19:22:02.000805 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 18 19:22:02 crc kubenswrapper[4942]: I0218 19:22:02.015381 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 18 19:22:02 crc kubenswrapper[4942]: I0218 19:22:02.157450 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 18 19:22:02 crc kubenswrapper[4942]: I0218 19:22:02.457563 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 18 19:22:02 crc kubenswrapper[4942]: I0218 19:22:02.560899 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 19:22:02 crc kubenswrapper[4942]: I0218 19:22:02.664878 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 18 19:22:02 crc kubenswrapper[4942]: I0218 19:22:02.739177 4942 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 18 19:22:02 crc kubenswrapper[4942]: I0218 19:22:02.785935 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 18 19:22:03 crc kubenswrapper[4942]: I0218 19:22:03.066355 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 18 19:22:03 crc kubenswrapper[4942]: I0218 19:22:03.146400 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 18 19:22:03 crc kubenswrapper[4942]: I0218 19:22:03.158155 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 18 19:22:03 crc kubenswrapper[4942]: I0218 19:22:03.299877 4942 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 19:22:03 crc kubenswrapper[4942]: I0218 19:22:03.338510 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 18 19:22:03 crc kubenswrapper[4942]: I0218 19:22:03.383678 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 18 19:22:03 crc kubenswrapper[4942]: I0218 19:22:03.449787 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 18 19:22:03 crc kubenswrapper[4942]: I0218 19:22:03.789311 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 19:22:04 crc kubenswrapper[4942]: I0218 19:22:04.185873 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 
19:22:05.357409 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.357739 4942 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="17d52aa652e2262a448752f8eeedf1ade032558596806a1871b71588f0f54812" exitCode=137 Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.410214 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.410297 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.488150 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.488199 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.488227 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.488268 4942 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.488292 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.488321 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.488356 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.488330 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.488457 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.488504 4942 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.488515 4942 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.488523 4942 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.496072 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.589807 4942 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:05 crc kubenswrapper[4942]: I0218 19:22:05.589848 4942 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:06 crc kubenswrapper[4942]: I0218 19:22:06.363841 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 18 19:22:06 crc kubenswrapper[4942]: I0218 19:22:06.364121 4942 scope.go:117] "RemoveContainer" containerID="17d52aa652e2262a448752f8eeedf1ade032558596806a1871b71588f0f54812" Feb 18 19:22:06 crc kubenswrapper[4942]: I0218 19:22:06.364459 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 18 19:22:07 crc kubenswrapper[4942]: I0218 19:22:07.043991 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 18 19:22:20 crc kubenswrapper[4942]: I0218 19:22:20.799524 4942 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Feb 18 19:22:27 crc kubenswrapper[4942]: I0218 19:22:27.983753 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z4t28"]
Feb 18 19:22:27 crc kubenswrapper[4942]: I0218 19:22:27.984948 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" podUID="5d6ad520-b407-4b86-867b-9e9658bfa536" containerName="controller-manager" containerID="cri-o://023457a07127e4c5a3020cc7b562185bd2142efdc686d72b522eec24b84f6fdf" gracePeriod=30
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.089548 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5"]
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.089873 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" podUID="fa346657-46eb-4817-b206-4c09d46d4a55" containerName="route-controller-manager" containerID="cri-o://04d3d8f0260a49004f14e1e12877830297236a2190fa7c6cae15db82a5df0a0c" gracePeriod=30
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.326046 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28"
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.384089 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5"
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.426415 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-client-ca\") pod \"5d6ad520-b407-4b86-867b-9e9658bfa536\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") "
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.426842 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d6ad520-b407-4b86-867b-9e9658bfa536-serving-cert\") pod \"5d6ad520-b407-4b86-867b-9e9658bfa536\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") "
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.426908 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-proxy-ca-bundles\") pod \"5d6ad520-b407-4b86-867b-9e9658bfa536\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") "
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.427204 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-client-ca" (OuterVolumeSpecName: "client-ca") pod "5d6ad520-b407-4b86-867b-9e9658bfa536" (UID: "5d6ad520-b407-4b86-867b-9e9658bfa536"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.427214 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bpp7\" (UniqueName: \"kubernetes.io/projected/5d6ad520-b407-4b86-867b-9e9658bfa536-kube-api-access-2bpp7\") pod \"5d6ad520-b407-4b86-867b-9e9658bfa536\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") "
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.427500 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-config\") pod \"5d6ad520-b407-4b86-867b-9e9658bfa536\" (UID: \"5d6ad520-b407-4b86-867b-9e9658bfa536\") "
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.427690 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5d6ad520-b407-4b86-867b-9e9658bfa536" (UID: "5d6ad520-b407-4b86-867b-9e9658bfa536"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.427727 4942 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-client-ca\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.428315 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-config" (OuterVolumeSpecName: "config") pod "5d6ad520-b407-4b86-867b-9e9658bfa536" (UID: "5d6ad520-b407-4b86-867b-9e9658bfa536"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.431647 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d6ad520-b407-4b86-867b-9e9658bfa536-kube-api-access-2bpp7" (OuterVolumeSpecName: "kube-api-access-2bpp7") pod "5d6ad520-b407-4b86-867b-9e9658bfa536" (UID: "5d6ad520-b407-4b86-867b-9e9658bfa536"). InnerVolumeSpecName "kube-api-access-2bpp7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.431841 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d6ad520-b407-4b86-867b-9e9658bfa536-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5d6ad520-b407-4b86-867b-9e9658bfa536" (UID: "5d6ad520-b407-4b86-867b-9e9658bfa536"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.520531 4942 generic.go:334] "Generic (PLEG): container finished" podID="5d6ad520-b407-4b86-867b-9e9658bfa536" containerID="023457a07127e4c5a3020cc7b562185bd2142efdc686d72b522eec24b84f6fdf" exitCode=0
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.520604 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28"
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.520635 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" event={"ID":"5d6ad520-b407-4b86-867b-9e9658bfa536","Type":"ContainerDied","Data":"023457a07127e4c5a3020cc7b562185bd2142efdc686d72b522eec24b84f6fdf"}
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.520697 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-z4t28" event={"ID":"5d6ad520-b407-4b86-867b-9e9658bfa536","Type":"ContainerDied","Data":"561f208636e4ed3a972d1961d576d8357f830eea84893972b2e168b33bc8de2c"}
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.520718 4942 scope.go:117] "RemoveContainer" containerID="023457a07127e4c5a3020cc7b562185bd2142efdc686d72b522eec24b84f6fdf"
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.523218 4942 generic.go:334] "Generic (PLEG): container finished" podID="fa346657-46eb-4817-b206-4c09d46d4a55" containerID="04d3d8f0260a49004f14e1e12877830297236a2190fa7c6cae15db82a5df0a0c" exitCode=0
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.523338 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" event={"ID":"fa346657-46eb-4817-b206-4c09d46d4a55","Type":"ContainerDied","Data":"04d3d8f0260a49004f14e1e12877830297236a2190fa7c6cae15db82a5df0a0c"}
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.523367 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5"
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.523421 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5" event={"ID":"fa346657-46eb-4817-b206-4c09d46d4a55","Type":"ContainerDied","Data":"ef127dd826aba726a31acfac09be4ab1cb60219849d22bd68a56ddc0ec361b83"}
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.528507 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mddqc\" (UniqueName: \"kubernetes.io/projected/fa346657-46eb-4817-b206-4c09d46d4a55-kube-api-access-mddqc\") pod \"fa346657-46eb-4817-b206-4c09d46d4a55\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") "
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.528612 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa346657-46eb-4817-b206-4c09d46d4a55-client-ca\") pod \"fa346657-46eb-4817-b206-4c09d46d4a55\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") "
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.528667 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa346657-46eb-4817-b206-4c09d46d4a55-config\") pod \"fa346657-46eb-4817-b206-4c09d46d4a55\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") "
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.528840 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa346657-46eb-4817-b206-4c09d46d4a55-serving-cert\") pod \"fa346657-46eb-4817-b206-4c09d46d4a55\" (UID: \"fa346657-46eb-4817-b206-4c09d46d4a55\") "
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.529186 4942 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.529221 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d6ad520-b407-4b86-867b-9e9658bfa536-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.529242 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bpp7\" (UniqueName: \"kubernetes.io/projected/5d6ad520-b407-4b86-867b-9e9658bfa536-kube-api-access-2bpp7\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.529263 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d6ad520-b407-4b86-867b-9e9658bfa536-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.529659 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa346657-46eb-4817-b206-4c09d46d4a55-client-ca" (OuterVolumeSpecName: "client-ca") pod "fa346657-46eb-4817-b206-4c09d46d4a55" (UID: "fa346657-46eb-4817-b206-4c09d46d4a55"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.530101 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa346657-46eb-4817-b206-4c09d46d4a55-config" (OuterVolumeSpecName: "config") pod "fa346657-46eb-4817-b206-4c09d46d4a55" (UID: "fa346657-46eb-4817-b206-4c09d46d4a55"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.534472 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa346657-46eb-4817-b206-4c09d46d4a55-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fa346657-46eb-4817-b206-4c09d46d4a55" (UID: "fa346657-46eb-4817-b206-4c09d46d4a55"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.536083 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa346657-46eb-4817-b206-4c09d46d4a55-kube-api-access-mddqc" (OuterVolumeSpecName: "kube-api-access-mddqc") pod "fa346657-46eb-4817-b206-4c09d46d4a55" (UID: "fa346657-46eb-4817-b206-4c09d46d4a55"). InnerVolumeSpecName "kube-api-access-mddqc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.549888 4942 scope.go:117] "RemoveContainer" containerID="023457a07127e4c5a3020cc7b562185bd2142efdc686d72b522eec24b84f6fdf"
Feb 18 19:22:28 crc kubenswrapper[4942]: E0218 19:22:28.550814 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"023457a07127e4c5a3020cc7b562185bd2142efdc686d72b522eec24b84f6fdf\": container with ID starting with 023457a07127e4c5a3020cc7b562185bd2142efdc686d72b522eec24b84f6fdf not found: ID does not exist" containerID="023457a07127e4c5a3020cc7b562185bd2142efdc686d72b522eec24b84f6fdf"
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.550961 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"023457a07127e4c5a3020cc7b562185bd2142efdc686d72b522eec24b84f6fdf"} err="failed to get container status \"023457a07127e4c5a3020cc7b562185bd2142efdc686d72b522eec24b84f6fdf\": rpc error: code = NotFound desc = could not find container \"023457a07127e4c5a3020cc7b562185bd2142efdc686d72b522eec24b84f6fdf\": container with ID starting with 023457a07127e4c5a3020cc7b562185bd2142efdc686d72b522eec24b84f6fdf not found: ID does not exist"
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.550997 4942 scope.go:117] "RemoveContainer" containerID="04d3d8f0260a49004f14e1e12877830297236a2190fa7c6cae15db82a5df0a0c"
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.572026 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z4t28"]
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.576743 4942 scope.go:117] "RemoveContainer" containerID="04d3d8f0260a49004f14e1e12877830297236a2190fa7c6cae15db82a5df0a0c"
Feb 18 19:22:28 crc kubenswrapper[4942]: E0218 19:22:28.577449 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04d3d8f0260a49004f14e1e12877830297236a2190fa7c6cae15db82a5df0a0c\": container with ID starting with 04d3d8f0260a49004f14e1e12877830297236a2190fa7c6cae15db82a5df0a0c not found: ID does not exist" containerID="04d3d8f0260a49004f14e1e12877830297236a2190fa7c6cae15db82a5df0a0c"
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.577533 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04d3d8f0260a49004f14e1e12877830297236a2190fa7c6cae15db82a5df0a0c"} err="failed to get container status \"04d3d8f0260a49004f14e1e12877830297236a2190fa7c6cae15db82a5df0a0c\": rpc error: code = NotFound desc = could not find container \"04d3d8f0260a49004f14e1e12877830297236a2190fa7c6cae15db82a5df0a0c\": container with ID starting with 04d3d8f0260a49004f14e1e12877830297236a2190fa7c6cae15db82a5df0a0c not found: ID does not exist"
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.581063 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z4t28"]
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.630964 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fa346657-46eb-4817-b206-4c09d46d4a55-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.631034 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mddqc\" (UniqueName: \"kubernetes.io/projected/fa346657-46eb-4817-b206-4c09d46d4a55-kube-api-access-mddqc\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.631059 4942 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fa346657-46eb-4817-b206-4c09d46d4a55-client-ca\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.631085 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa346657-46eb-4817-b206-4c09d46d4a55-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.866064 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5"]
Feb 18 19:22:28 crc kubenswrapper[4942]: I0218 19:22:28.869428 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xbkl5"]
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.048985 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d6ad520-b407-4b86-867b-9e9658bfa536" path="/var/lib/kubelet/pods/5d6ad520-b407-4b86-867b-9e9658bfa536/volumes"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.050216 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa346657-46eb-4817-b206-4c09d46d4a55" path="/var/lib/kubelet/pods/fa346657-46eb-4817-b206-4c09d46d4a55/volumes"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.083637 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8464758d69-lqr4v"]
Feb 18 19:22:29 crc kubenswrapper[4942]: E0218 19:22:29.083841 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa346657-46eb-4817-b206-4c09d46d4a55" containerName="route-controller-manager"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.083854 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa346657-46eb-4817-b206-4c09d46d4a55" containerName="route-controller-manager"
Feb 18 19:22:29 crc kubenswrapper[4942]: E0218 19:22:29.083867 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d6ad520-b407-4b86-867b-9e9658bfa536" containerName="controller-manager"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.083873 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d6ad520-b407-4b86-867b-9e9658bfa536" containerName="controller-manager"
Feb 18 19:22:29 crc kubenswrapper[4942]: E0218 19:22:29.083887 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.083894 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.083984 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa346657-46eb-4817-b206-4c09d46d4a55" containerName="route-controller-manager"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.083994 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.084003 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d6ad520-b407-4b86-867b-9e9658bfa536" containerName="controller-manager"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.084355 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.087351 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.089046 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.089091 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.089219 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.090070 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.092543 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.100712 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.105271 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8464758d69-lqr4v"]
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.239921 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6bnk\" (UniqueName: \"kubernetes.io/projected/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-kube-api-access-k6bnk\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.240139 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-client-ca\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.240418 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-proxy-ca-bundles\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.240523 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-serving-cert\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.240750 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-config\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.342832 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-config\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.342955 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6bnk\" (UniqueName: \"kubernetes.io/projected/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-kube-api-access-k6bnk\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.343021 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-client-ca\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.343082 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-proxy-ca-bundles\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.343123 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-serving-cert\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.345347 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-proxy-ca-bundles\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.345596 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-client-ca\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.345858 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-config\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.351282 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-serving-cert\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.373508 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6bnk\" (UniqueName: \"kubernetes.io/projected/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-kube-api-access-k6bnk\") pod \"controller-manager-8464758d69-lqr4v\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.448716 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:29 crc kubenswrapper[4942]: I0218 19:22:29.659522 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8464758d69-lqr4v"]
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.080739 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"]
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.081326 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.083577 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.083860 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.084292 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.084513 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.084532 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.084549 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.096528 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"]
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.255211 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsrnq\" (UniqueName: \"kubernetes.io/projected/35947dc4-201a-4fbd-9c5c-9b0766d22557-kube-api-access-qsrnq\") pod \"route-controller-manager-ffc5946f9-cm2jk\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.255287 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35947dc4-201a-4fbd-9c5c-9b0766d22557-serving-cert\") pod \"route-controller-manager-ffc5946f9-cm2jk\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.255320 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35947dc4-201a-4fbd-9c5c-9b0766d22557-config\") pod \"route-controller-manager-ffc5946f9-cm2jk\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.255341 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35947dc4-201a-4fbd-9c5c-9b0766d22557-client-ca\") pod \"route-controller-manager-ffc5946f9-cm2jk\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.356807 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsrnq\" (UniqueName: \"kubernetes.io/projected/35947dc4-201a-4fbd-9c5c-9b0766d22557-kube-api-access-qsrnq\") pod \"route-controller-manager-ffc5946f9-cm2jk\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.357342 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35947dc4-201a-4fbd-9c5c-9b0766d22557-serving-cert\") pod \"route-controller-manager-ffc5946f9-cm2jk\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.357617 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35947dc4-201a-4fbd-9c5c-9b0766d22557-config\") pod \"route-controller-manager-ffc5946f9-cm2jk\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.357926 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35947dc4-201a-4fbd-9c5c-9b0766d22557-client-ca\") pod \"route-controller-manager-ffc5946f9-cm2jk\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.359656 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35947dc4-201a-4fbd-9c5c-9b0766d22557-client-ca\") pod \"route-controller-manager-ffc5946f9-cm2jk\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.360310 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35947dc4-201a-4fbd-9c5c-9b0766d22557-config\") pod \"route-controller-manager-ffc5946f9-cm2jk\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.362782 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35947dc4-201a-4fbd-9c5c-9b0766d22557-serving-cert\") pod \"route-controller-manager-ffc5946f9-cm2jk\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.373213 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsrnq\" (UniqueName: \"kubernetes.io/projected/35947dc4-201a-4fbd-9c5c-9b0766d22557-kube-api-access-qsrnq\") pod \"route-controller-manager-ffc5946f9-cm2jk\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.399250 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.539052 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v" event={"ID":"bf3631bc-384b-44bf-a012-7a1ab90ceb0e","Type":"ContainerStarted","Data":"915aa75e8df4dbeea679a1cf7bebbe608496cd3849b965879f7008cb226fc9de"}
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.539133 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v" event={"ID":"bf3631bc-384b-44bf-a012-7a1ab90ceb0e","Type":"ContainerStarted","Data":"ac5bf5f33c7a2c2e5df869efaf323352918861e3a5e68c61cf0a32573f034d12"}
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.539387 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.546382 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.555531 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v" podStartSLOduration=2.555517724 podStartE2EDuration="2.555517724s" podCreationTimestamp="2026-02-18 19:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:22:30.554193296 +0000 UTC m=+310.259125961" watchObservedRunningTime="2026-02-18 19:22:30.555517724 +0000 UTC m=+310.260450389"
Feb 18 19:22:30 crc kubenswrapper[4942]: I0218 19:22:30.846636 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"]
Feb
18 19:22:30 crc kubenswrapper[4942]: W0218 19:22:30.854441 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35947dc4_201a_4fbd_9c5c_9b0766d22557.slice/crio-79122ab086e7f81b3ad38361643e8fa8fc3704a751d70239bb25e6b1e8aa9b08 WatchSource:0}: Error finding container 79122ab086e7f81b3ad38361643e8fa8fc3704a751d70239bb25e6b1e8aa9b08: Status 404 returned error can't find the container with id 79122ab086e7f81b3ad38361643e8fa8fc3704a751d70239bb25e6b1e8aa9b08 Feb 18 19:22:31 crc kubenswrapper[4942]: I0218 19:22:31.545312 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk" event={"ID":"35947dc4-201a-4fbd-9c5c-9b0766d22557","Type":"ContainerStarted","Data":"e12524f14ccd55e1c3802ffb9d505f7488b76a71edd194d8175292f5bd8ed263"} Feb 18 19:22:31 crc kubenswrapper[4942]: I0218 19:22:31.545373 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk" event={"ID":"35947dc4-201a-4fbd-9c5c-9b0766d22557","Type":"ContainerStarted","Data":"79122ab086e7f81b3ad38361643e8fa8fc3704a751d70239bb25e6b1e8aa9b08"} Feb 18 19:22:31 crc kubenswrapper[4942]: I0218 19:22:31.545727 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk" Feb 18 19:22:31 crc kubenswrapper[4942]: I0218 19:22:31.552369 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk" Feb 18 19:22:31 crc kubenswrapper[4942]: I0218 19:22:31.593449 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk" podStartSLOduration=3.593428555 podStartE2EDuration="3.593428555s" podCreationTimestamp="2026-02-18 19:22:28 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:22:31.572215794 +0000 UTC m=+311.277148529" watchObservedRunningTime="2026-02-18 19:22:31.593428555 +0000 UTC m=+311.298361250" Feb 18 19:22:40 crc kubenswrapper[4942]: I0218 19:22:40.945715 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8464758d69-lqr4v"] Feb 18 19:22:40 crc kubenswrapper[4942]: I0218 19:22:40.946478 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v" podUID="bf3631bc-384b-44bf-a012-7a1ab90ceb0e" containerName="controller-manager" containerID="cri-o://915aa75e8df4dbeea679a1cf7bebbe608496cd3849b965879f7008cb226fc9de" gracePeriod=30 Feb 18 19:22:40 crc kubenswrapper[4942]: I0218 19:22:40.956997 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"] Feb 18 19:22:40 crc kubenswrapper[4942]: I0218 19:22:40.957190 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk" podUID="35947dc4-201a-4fbd-9c5c-9b0766d22557" containerName="route-controller-manager" containerID="cri-o://e12524f14ccd55e1c3802ffb9d505f7488b76a71edd194d8175292f5bd8ed263" gracePeriod=30 Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.424016 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.512303 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsrnq\" (UniqueName: \"kubernetes.io/projected/35947dc4-201a-4fbd-9c5c-9b0766d22557-kube-api-access-qsrnq\") pod \"35947dc4-201a-4fbd-9c5c-9b0766d22557\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.512411 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35947dc4-201a-4fbd-9c5c-9b0766d22557-client-ca\") pod \"35947dc4-201a-4fbd-9c5c-9b0766d22557\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.512493 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35947dc4-201a-4fbd-9c5c-9b0766d22557-serving-cert\") pod \"35947dc4-201a-4fbd-9c5c-9b0766d22557\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.512534 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35947dc4-201a-4fbd-9c5c-9b0766d22557-config\") pod \"35947dc4-201a-4fbd-9c5c-9b0766d22557\" (UID: \"35947dc4-201a-4fbd-9c5c-9b0766d22557\") " Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.513105 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35947dc4-201a-4fbd-9c5c-9b0766d22557-client-ca" (OuterVolumeSpecName: "client-ca") pod "35947dc4-201a-4fbd-9c5c-9b0766d22557" (UID: "35947dc4-201a-4fbd-9c5c-9b0766d22557"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.513479 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35947dc4-201a-4fbd-9c5c-9b0766d22557-config" (OuterVolumeSpecName: "config") pod "35947dc4-201a-4fbd-9c5c-9b0766d22557" (UID: "35947dc4-201a-4fbd-9c5c-9b0766d22557"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.515645 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.518589 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35947dc4-201a-4fbd-9c5c-9b0766d22557-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "35947dc4-201a-4fbd-9c5c-9b0766d22557" (UID: "35947dc4-201a-4fbd-9c5c-9b0766d22557"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.519152 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35947dc4-201a-4fbd-9c5c-9b0766d22557-kube-api-access-qsrnq" (OuterVolumeSpecName: "kube-api-access-qsrnq") pod "35947dc4-201a-4fbd-9c5c-9b0766d22557" (UID: "35947dc4-201a-4fbd-9c5c-9b0766d22557"). InnerVolumeSpecName "kube-api-access-qsrnq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.604355 4942 generic.go:334] "Generic (PLEG): container finished" podID="bf3631bc-384b-44bf-a012-7a1ab90ceb0e" containerID="915aa75e8df4dbeea679a1cf7bebbe608496cd3849b965879f7008cb226fc9de" exitCode=0 Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.604399 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v" event={"ID":"bf3631bc-384b-44bf-a012-7a1ab90ceb0e","Type":"ContainerDied","Data":"915aa75e8df4dbeea679a1cf7bebbe608496cd3849b965879f7008cb226fc9de"} Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.604460 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v" event={"ID":"bf3631bc-384b-44bf-a012-7a1ab90ceb0e","Type":"ContainerDied","Data":"ac5bf5f33c7a2c2e5df869efaf323352918861e3a5e68c61cf0a32573f034d12"} Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.604486 4942 scope.go:117] "RemoveContainer" containerID="915aa75e8df4dbeea679a1cf7bebbe608496cd3849b965879f7008cb226fc9de" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.604548 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8464758d69-lqr4v" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.606080 4942 generic.go:334] "Generic (PLEG): container finished" podID="35947dc4-201a-4fbd-9c5c-9b0766d22557" containerID="e12524f14ccd55e1c3802ffb9d505f7488b76a71edd194d8175292f5bd8ed263" exitCode=0 Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.606129 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk" event={"ID":"35947dc4-201a-4fbd-9c5c-9b0766d22557","Type":"ContainerDied","Data":"e12524f14ccd55e1c3802ffb9d505f7488b76a71edd194d8175292f5bd8ed263"} Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.606164 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk" event={"ID":"35947dc4-201a-4fbd-9c5c-9b0766d22557","Type":"ContainerDied","Data":"79122ab086e7f81b3ad38361643e8fa8fc3704a751d70239bb25e6b1e8aa9b08"} Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.606211 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.613659 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-config\") pod \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.613756 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6bnk\" (UniqueName: \"kubernetes.io/projected/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-kube-api-access-k6bnk\") pod \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.613856 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-serving-cert\") pod \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.613928 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-proxy-ca-bundles\") pod \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.613966 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-client-ca\") pod \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\" (UID: \"bf3631bc-384b-44bf-a012-7a1ab90ceb0e\") " Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.614210 4942 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/35947dc4-201a-4fbd-9c5c-9b0766d22557-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.614230 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35947dc4-201a-4fbd-9c5c-9b0766d22557-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.614239 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsrnq\" (UniqueName: \"kubernetes.io/projected/35947dc4-201a-4fbd-9c5c-9b0766d22557-kube-api-access-qsrnq\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.614250 4942 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35947dc4-201a-4fbd-9c5c-9b0766d22557-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.615023 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bf3631bc-384b-44bf-a012-7a1ab90ceb0e" (UID: "bf3631bc-384b-44bf-a012-7a1ab90ceb0e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.615161 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-config" (OuterVolumeSpecName: "config") pod "bf3631bc-384b-44bf-a012-7a1ab90ceb0e" (UID: "bf3631bc-384b-44bf-a012-7a1ab90ceb0e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.615520 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-client-ca" (OuterVolumeSpecName: "client-ca") pod "bf3631bc-384b-44bf-a012-7a1ab90ceb0e" (UID: "bf3631bc-384b-44bf-a012-7a1ab90ceb0e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.619904 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bf3631bc-384b-44bf-a012-7a1ab90ceb0e" (UID: "bf3631bc-384b-44bf-a012-7a1ab90ceb0e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.619943 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-kube-api-access-k6bnk" (OuterVolumeSpecName: "kube-api-access-k6bnk") pod "bf3631bc-384b-44bf-a012-7a1ab90ceb0e" (UID: "bf3631bc-384b-44bf-a012-7a1ab90ceb0e"). InnerVolumeSpecName "kube-api-access-k6bnk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.629192 4942 scope.go:117] "RemoveContainer" containerID="915aa75e8df4dbeea679a1cf7bebbe608496cd3849b965879f7008cb226fc9de" Feb 18 19:22:41 crc kubenswrapper[4942]: E0218 19:22:41.630931 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"915aa75e8df4dbeea679a1cf7bebbe608496cd3849b965879f7008cb226fc9de\": container with ID starting with 915aa75e8df4dbeea679a1cf7bebbe608496cd3849b965879f7008cb226fc9de not found: ID does not exist" containerID="915aa75e8df4dbeea679a1cf7bebbe608496cd3849b965879f7008cb226fc9de" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.630989 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"915aa75e8df4dbeea679a1cf7bebbe608496cd3849b965879f7008cb226fc9de"} err="failed to get container status \"915aa75e8df4dbeea679a1cf7bebbe608496cd3849b965879f7008cb226fc9de\": rpc error: code = NotFound desc = could not find container \"915aa75e8df4dbeea679a1cf7bebbe608496cd3849b965879f7008cb226fc9de\": container with ID starting with 915aa75e8df4dbeea679a1cf7bebbe608496cd3849b965879f7008cb226fc9de not found: ID does not exist" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.631024 4942 scope.go:117] "RemoveContainer" containerID="e12524f14ccd55e1c3802ffb9d505f7488b76a71edd194d8175292f5bd8ed263" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.632963 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"] Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.635630 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-ffc5946f9-cm2jk"] Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.651315 4942 scope.go:117] "RemoveContainer" 
containerID="e12524f14ccd55e1c3802ffb9d505f7488b76a71edd194d8175292f5bd8ed263" Feb 18 19:22:41 crc kubenswrapper[4942]: E0218 19:22:41.651816 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e12524f14ccd55e1c3802ffb9d505f7488b76a71edd194d8175292f5bd8ed263\": container with ID starting with e12524f14ccd55e1c3802ffb9d505f7488b76a71edd194d8175292f5bd8ed263 not found: ID does not exist" containerID="e12524f14ccd55e1c3802ffb9d505f7488b76a71edd194d8175292f5bd8ed263" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.651848 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e12524f14ccd55e1c3802ffb9d505f7488b76a71edd194d8175292f5bd8ed263"} err="failed to get container status \"e12524f14ccd55e1c3802ffb9d505f7488b76a71edd194d8175292f5bd8ed263\": rpc error: code = NotFound desc = could not find container \"e12524f14ccd55e1c3802ffb9d505f7488b76a71edd194d8175292f5bd8ed263\": container with ID starting with e12524f14ccd55e1c3802ffb9d505f7488b76a71edd194d8175292f5bd8ed263 not found: ID does not exist" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.714682 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.714722 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6bnk\" (UniqueName: \"kubernetes.io/projected/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-kube-api-access-k6bnk\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.714734 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.714748 4942 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.714800 4942 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bf3631bc-384b-44bf-a012-7a1ab90ceb0e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.955000 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8464758d69-lqr4v"] Feb 18 19:22:41 crc kubenswrapper[4942]: I0218 19:22:41.960859 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8464758d69-lqr4v"] Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.097363 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5678897b9b-xkwzk"] Feb 18 19:22:42 crc kubenswrapper[4942]: E0218 19:22:42.097681 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf3631bc-384b-44bf-a012-7a1ab90ceb0e" containerName="controller-manager" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.097965 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf3631bc-384b-44bf-a012-7a1ab90ceb0e" containerName="controller-manager" Feb 18 19:22:42 crc kubenswrapper[4942]: E0218 19:22:42.098012 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35947dc4-201a-4fbd-9c5c-9b0766d22557" containerName="route-controller-manager" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.098028 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="35947dc4-201a-4fbd-9c5c-9b0766d22557" containerName="route-controller-manager" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.099795 4942 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bf3631bc-384b-44bf-a012-7a1ab90ceb0e" containerName="controller-manager" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.099857 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="35947dc4-201a-4fbd-9c5c-9b0766d22557" containerName="route-controller-manager" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.100611 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.106168 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.106386 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.106471 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.106661 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.106964 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.107908 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.111524 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5678897b9b-xkwzk"] Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.119365 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 
19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.123930 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-proxy-ca-bundles\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.124137 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f32533e-b907-4ba0-a54f-df71a6863c6d-serving-cert\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.124203 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-584kq\" (UniqueName: \"kubernetes.io/projected/1f32533e-b907-4ba0-a54f-df71a6863c6d-kube-api-access-584kq\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.124274 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-config\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.124359 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-client-ca\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.225915 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-584kq\" (UniqueName: \"kubernetes.io/projected/1f32533e-b907-4ba0-a54f-df71a6863c6d-kube-api-access-584kq\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.226012 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-config\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.226086 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-client-ca\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.226149 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-proxy-ca-bundles\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.226218 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f32533e-b907-4ba0-a54f-df71a6863c6d-serving-cert\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.227747 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-client-ca\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.228239 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-proxy-ca-bundles\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.228434 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-config\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.233733 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f32533e-b907-4ba0-a54f-df71a6863c6d-serving-cert\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" Feb 18 19:22:42 crc 
kubenswrapper[4942]: I0218 19:22:42.248222 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-584kq\" (UniqueName: \"kubernetes.io/projected/1f32533e-b907-4ba0-a54f-df71a6863c6d-kube-api-access-584kq\") pod \"controller-manager-5678897b9b-xkwzk\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") " pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk"
Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.469561 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk"
Feb 18 19:22:42 crc kubenswrapper[4942]: I0218 19:22:42.719445 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5678897b9b-xkwzk"]
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.044987 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35947dc4-201a-4fbd-9c5c-9b0766d22557" path="/var/lib/kubelet/pods/35947dc4-201a-4fbd-9c5c-9b0766d22557/volumes"
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.046498 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf3631bc-384b-44bf-a012-7a1ab90ceb0e" path="/var/lib/kubelet/pods/bf3631bc-384b-44bf-a012-7a1ab90ceb0e/volumes"
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.091047 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"]
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.091973 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.093542 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.093822 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.094156 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.094369 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.094382 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.096102 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.103198 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"]
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.148267 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0c3cea3-65a4-46fc-9185-d057169b4174-client-ca\") pod \"route-controller-manager-5c6fb6955d-rp6mz\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.148326 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjdbd\" (UniqueName: \"kubernetes.io/projected/b0c3cea3-65a4-46fc-9185-d057169b4174-kube-api-access-kjdbd\") pod \"route-controller-manager-5c6fb6955d-rp6mz\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.148371 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0c3cea3-65a4-46fc-9185-d057169b4174-serving-cert\") pod \"route-controller-manager-5c6fb6955d-rp6mz\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.148446 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0c3cea3-65a4-46fc-9185-d057169b4174-config\") pod \"route-controller-manager-5c6fb6955d-rp6mz\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.249298 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0c3cea3-65a4-46fc-9185-d057169b4174-config\") pod \"route-controller-manager-5c6fb6955d-rp6mz\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.249349 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0c3cea3-65a4-46fc-9185-d057169b4174-client-ca\") pod \"route-controller-manager-5c6fb6955d-rp6mz\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.249367 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjdbd\" (UniqueName: \"kubernetes.io/projected/b0c3cea3-65a4-46fc-9185-d057169b4174-kube-api-access-kjdbd\") pod \"route-controller-manager-5c6fb6955d-rp6mz\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.249397 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0c3cea3-65a4-46fc-9185-d057169b4174-serving-cert\") pod \"route-controller-manager-5c6fb6955d-rp6mz\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.250548 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0c3cea3-65a4-46fc-9185-d057169b4174-client-ca\") pod \"route-controller-manager-5c6fb6955d-rp6mz\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.251571 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0c3cea3-65a4-46fc-9185-d057169b4174-config\") pod \"route-controller-manager-5c6fb6955d-rp6mz\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.255535 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0c3cea3-65a4-46fc-9185-d057169b4174-serving-cert\") pod \"route-controller-manager-5c6fb6955d-rp6mz\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.271830 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjdbd\" (UniqueName: \"kubernetes.io/projected/b0c3cea3-65a4-46fc-9185-d057169b4174-kube-api-access-kjdbd\") pod \"route-controller-manager-5c6fb6955d-rp6mz\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.449498 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.622868 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" event={"ID":"1f32533e-b907-4ba0-a54f-df71a6863c6d","Type":"ContainerStarted","Data":"d422b6ce6418a0e09a6e40a46330d86011e1e1d6d58089842da04f410ac6b22d"}
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.622911 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" event={"ID":"1f32533e-b907-4ba0-a54f-df71a6863c6d","Type":"ContainerStarted","Data":"c8548dc0e24ec36f22e8ba06bf062da1a1bdfa4aa5ed316e0b965a17574740e1"}
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.623245 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk"
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.629614 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk"
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.643656 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"]
Feb 18 19:22:43 crc kubenswrapper[4942]: I0218 19:22:43.645092 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" podStartSLOduration=2.645079757 podStartE2EDuration="2.645079757s" podCreationTimestamp="2026-02-18 19:22:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:22:43.642139554 +0000 UTC m=+323.347072239" watchObservedRunningTime="2026-02-18 19:22:43.645079757 +0000 UTC m=+323.350012422"
Feb 18 19:22:43 crc kubenswrapper[4942]: W0218 19:22:43.649971 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0c3cea3_65a4_46fc_9185_d057169b4174.slice/crio-714d652ad9fc0a287b99250d51e1e142f7f1431c0d7742444938d5e3b88e03f0 WatchSource:0}: Error finding container 714d652ad9fc0a287b99250d51e1e142f7f1431c0d7742444938d5e3b88e03f0: Status 404 returned error can't find the container with id 714d652ad9fc0a287b99250d51e1e142f7f1431c0d7742444938d5e3b88e03f0
Feb 18 19:22:44 crc kubenswrapper[4942]: I0218 19:22:44.629273 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" event={"ID":"b0c3cea3-65a4-46fc-9185-d057169b4174","Type":"ContainerStarted","Data":"7d305ffaded32d689690e4389ef3771bc28998898ef46b672d0975af1f0d2c1d"}
Feb 18 19:22:44 crc kubenswrapper[4942]: I0218 19:22:44.629614 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" event={"ID":"b0c3cea3-65a4-46fc-9185-d057169b4174","Type":"ContainerStarted","Data":"714d652ad9fc0a287b99250d51e1e142f7f1431c0d7742444938d5e3b88e03f0"}
Feb 18 19:22:44 crc kubenswrapper[4942]: I0218 19:22:44.629628 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"
Feb 18 19:22:44 crc kubenswrapper[4942]: I0218 19:22:44.637850 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"
Feb 18 19:22:44 crc kubenswrapper[4942]: I0218 19:22:44.656281 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" podStartSLOduration=3.656261662 podStartE2EDuration="3.656261662s" podCreationTimestamp="2026-02-18 19:22:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:22:44.650738786 +0000 UTC m=+324.355671461" watchObservedRunningTime="2026-02-18 19:22:44.656261662 +0000 UTC m=+324.361194317"
Feb 18 19:23:07 crc kubenswrapper[4942]: I0218 19:23:07.955360 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5678897b9b-xkwzk"]
Feb 18 19:23:07 crc kubenswrapper[4942]: I0218 19:23:07.958432 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" podUID="1f32533e-b907-4ba0-a54f-df71a6863c6d" containerName="controller-manager" containerID="cri-o://d422b6ce6418a0e09a6e40a46330d86011e1e1d6d58089842da04f410ac6b22d" gracePeriod=30
Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.566543 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk"
Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.591163 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-584kq\" (UniqueName: \"kubernetes.io/projected/1f32533e-b907-4ba0-a54f-df71a6863c6d-kube-api-access-584kq\") pod \"1f32533e-b907-4ba0-a54f-df71a6863c6d\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") "
Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.591310 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-proxy-ca-bundles\") pod \"1f32533e-b907-4ba0-a54f-df71a6863c6d\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") "
Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.591373 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-config\") pod \"1f32533e-b907-4ba0-a54f-df71a6863c6d\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") "
Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.591420 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f32533e-b907-4ba0-a54f-df71a6863c6d-serving-cert\") pod \"1f32533e-b907-4ba0-a54f-df71a6863c6d\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") "
Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.591470 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-client-ca\") pod \"1f32533e-b907-4ba0-a54f-df71a6863c6d\" (UID: \"1f32533e-b907-4ba0-a54f-df71a6863c6d\") "
Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.592170 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1f32533e-b907-4ba0-a54f-df71a6863c6d" (UID: "1f32533e-b907-4ba0-a54f-df71a6863c6d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.592451 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-client-ca" (OuterVolumeSpecName: "client-ca") pod "1f32533e-b907-4ba0-a54f-df71a6863c6d" (UID: "1f32533e-b907-4ba0-a54f-df71a6863c6d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.592663 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-config" (OuterVolumeSpecName: "config") pod "1f32533e-b907-4ba0-a54f-df71a6863c6d" (UID: "1f32533e-b907-4ba0-a54f-df71a6863c6d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.598266 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f32533e-b907-4ba0-a54f-df71a6863c6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1f32533e-b907-4ba0-a54f-df71a6863c6d" (UID: "1f32533e-b907-4ba0-a54f-df71a6863c6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.606108 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f32533e-b907-4ba0-a54f-df71a6863c6d-kube-api-access-584kq" (OuterVolumeSpecName: "kube-api-access-584kq") pod "1f32533e-b907-4ba0-a54f-df71a6863c6d" (UID: "1f32533e-b907-4ba0-a54f-df71a6863c6d"). InnerVolumeSpecName "kube-api-access-584kq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.693280 4942 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.693316 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.693328 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f32533e-b907-4ba0-a54f-df71a6863c6d-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.693340 4942 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f32533e-b907-4ba0-a54f-df71a6863c6d-client-ca\") on node \"crc\" DevicePath \"\""
Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.693353 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-584kq\" (UniqueName: \"kubernetes.io/projected/1f32533e-b907-4ba0-a54f-df71a6863c6d-kube-api-access-584kq\") on node \"crc\" DevicePath \"\""
Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.774646 4942 generic.go:334] "Generic (PLEG): container finished" podID="1f32533e-b907-4ba0-a54f-df71a6863c6d" containerID="d422b6ce6418a0e09a6e40a46330d86011e1e1d6d58089842da04f410ac6b22d" exitCode=0
Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.774731 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" event={"ID":"1f32533e-b907-4ba0-a54f-df71a6863c6d","Type":"ContainerDied","Data":"d422b6ce6418a0e09a6e40a46330d86011e1e1d6d58089842da04f410ac6b22d"}
Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.774823 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk" event={"ID":"1f32533e-b907-4ba0-a54f-df71a6863c6d","Type":"ContainerDied","Data":"c8548dc0e24ec36f22e8ba06bf062da1a1bdfa4aa5ed316e0b965a17574740e1"}
Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.774859 4942 scope.go:117] "RemoveContainer" containerID="d422b6ce6418a0e09a6e40a46330d86011e1e1d6d58089842da04f410ac6b22d"
Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.774745 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5678897b9b-xkwzk"
Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.804402 4942 scope.go:117] "RemoveContainer" containerID="d422b6ce6418a0e09a6e40a46330d86011e1e1d6d58089842da04f410ac6b22d"
Feb 18 19:23:08 crc kubenswrapper[4942]: E0218 19:23:08.805205 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d422b6ce6418a0e09a6e40a46330d86011e1e1d6d58089842da04f410ac6b22d\": container with ID starting with d422b6ce6418a0e09a6e40a46330d86011e1e1d6d58089842da04f410ac6b22d not found: ID does not exist" containerID="d422b6ce6418a0e09a6e40a46330d86011e1e1d6d58089842da04f410ac6b22d"
Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.805274 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d422b6ce6418a0e09a6e40a46330d86011e1e1d6d58089842da04f410ac6b22d"} err="failed to get container status \"d422b6ce6418a0e09a6e40a46330d86011e1e1d6d58089842da04f410ac6b22d\": rpc error: code = NotFound desc = could not find container \"d422b6ce6418a0e09a6e40a46330d86011e1e1d6d58089842da04f410ac6b22d\": container with ID starting with d422b6ce6418a0e09a6e40a46330d86011e1e1d6d58089842da04f410ac6b22d not found: ID does not exist"
Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.813265 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5678897b9b-xkwzk"]
Feb 18 19:23:08 crc kubenswrapper[4942]: I0218 19:23:08.819305 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5678897b9b-xkwzk"]
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.044462 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f32533e-b907-4ba0-a54f-df71a6863c6d" path="/var/lib/kubelet/pods/1f32533e-b907-4ba0-a54f-df71a6863c6d/volumes"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.112835 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bd7dd95df-8ddw9"]
Feb 18 19:23:09 crc kubenswrapper[4942]: E0218 19:23:09.113070 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f32533e-b907-4ba0-a54f-df71a6863c6d" containerName="controller-manager"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.113085 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f32533e-b907-4ba0-a54f-df71a6863c6d" containerName="controller-manager"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.113197 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f32533e-b907-4ba0-a54f-df71a6863c6d" containerName="controller-manager"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.113616 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.116343 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.117033 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.117366 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.117651 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.118993 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.121220 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.126941 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.138541 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bd7dd95df-8ddw9"]
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.199546 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9qpx\" (UniqueName: \"kubernetes.io/projected/fb4e19d0-ff6c-45f9-872e-750bb8231014-kube-api-access-m9qpx\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.200028 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb4e19d0-ff6c-45f9-872e-750bb8231014-serving-cert\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.200140 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb4e19d0-ff6c-45f9-872e-750bb8231014-config\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.200315 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb4e19d0-ff6c-45f9-872e-750bb8231014-client-ca\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.200492 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fb4e19d0-ff6c-45f9-872e-750bb8231014-proxy-ca-bundles\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.302232 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb4e19d0-ff6c-45f9-872e-750bb8231014-serving-cert\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.302352 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb4e19d0-ff6c-45f9-872e-750bb8231014-config\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.302405 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb4e19d0-ff6c-45f9-872e-750bb8231014-client-ca\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.302464 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fb4e19d0-ff6c-45f9-872e-750bb8231014-proxy-ca-bundles\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.302555 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9qpx\" (UniqueName: \"kubernetes.io/projected/fb4e19d0-ff6c-45f9-872e-750bb8231014-kube-api-access-m9qpx\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.304387 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb4e19d0-ff6c-45f9-872e-750bb8231014-client-ca\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.305241 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb4e19d0-ff6c-45f9-872e-750bb8231014-config\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.305623 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fb4e19d0-ff6c-45f9-872e-750bb8231014-proxy-ca-bundles\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.311890 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb4e19d0-ff6c-45f9-872e-750bb8231014-serving-cert\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.334471 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9qpx\" (UniqueName: \"kubernetes.io/projected/fb4e19d0-ff6c-45f9-872e-750bb8231014-kube-api-access-m9qpx\") pod \"controller-manager-bd7dd95df-8ddw9\" (UID: \"fb4e19d0-ff6c-45f9-872e-750bb8231014\") " pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.428518 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9"
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.714289 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bd7dd95df-8ddw9"]
Feb 18 19:23:09 crc kubenswrapper[4942]: I0218 19:23:09.784133 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9" event={"ID":"fb4e19d0-ff6c-45f9-872e-750bb8231014","Type":"ContainerStarted","Data":"afd90d5b443ea5cf5cebd07e2c8014114986c27f1381efaec4e0a06ed2585461"}
Feb 18 19:23:10 crc kubenswrapper[4942]: I0218 19:23:10.792076 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9" event={"ID":"fb4e19d0-ff6c-45f9-872e-750bb8231014","Type":"ContainerStarted","Data":"0c40db728dcdbd061b234305e3ad34d84b236777107413954e1184878bc6241d"}
Feb 18 19:23:10 crc kubenswrapper[4942]: I0218 19:23:10.792404 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9"
Feb 18 19:23:10 crc kubenswrapper[4942]: I0218 19:23:10.798685 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9"
Feb 18 19:23:10 crc kubenswrapper[4942]: I0218 19:23:10.822615 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bd7dd95df-8ddw9" podStartSLOduration=3.82259369 podStartE2EDuration="3.82259369s" podCreationTimestamp="2026-02-18 19:23:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:23:10.818191948 +0000 UTC m=+350.523124613" watchObservedRunningTime="2026-02-18 19:23:10.82259369 +0000 UTC m=+350.527526355"
Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.753062 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vcns7"]
Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.754779 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vcns7"
Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.756863 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.770977 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vcns7"]
Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.786987 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dd4a143-c433-4caf-9416-c99755ec1bc5-utilities\") pod \"certified-operators-vcns7\" (UID: \"2dd4a143-c433-4caf-9416-c99755ec1bc5\") " pod="openshift-marketplace/certified-operators-vcns7"
Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.787069 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8js5p\" (UniqueName: \"kubernetes.io/projected/2dd4a143-c433-4caf-9416-c99755ec1bc5-kube-api-access-8js5p\") pod \"certified-operators-vcns7\" (UID: \"2dd4a143-c433-4caf-9416-c99755ec1bc5\") " pod="openshift-marketplace/certified-operators-vcns7"
Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.787111 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dd4a143-c433-4caf-9416-c99755ec1bc5-catalog-content\") pod \"certified-operators-vcns7\" (UID: \"2dd4a143-c433-4caf-9416-c99755ec1bc5\") " pod="openshift-marketplace/certified-operators-vcns7"
Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.888490 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dd4a143-c433-4caf-9416-c99755ec1bc5-utilities\") pod \"certified-operators-vcns7\" (UID: \"2dd4a143-c433-4caf-9416-c99755ec1bc5\") " pod="openshift-marketplace/certified-operators-vcns7"
Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.888554 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8js5p\" (UniqueName: \"kubernetes.io/projected/2dd4a143-c433-4caf-9416-c99755ec1bc5-kube-api-access-8js5p\") pod \"certified-operators-vcns7\" (UID: \"2dd4a143-c433-4caf-9416-c99755ec1bc5\") " pod="openshift-marketplace/certified-operators-vcns7"
Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.888579 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dd4a143-c433-4caf-9416-c99755ec1bc5-catalog-content\") pod \"certified-operators-vcns7\" (UID: \"2dd4a143-c433-4caf-9416-c99755ec1bc5\") " pod="openshift-marketplace/certified-operators-vcns7"
Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.889012 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dd4a143-c433-4caf-9416-c99755ec1bc5-catalog-content\") pod \"certified-operators-vcns7\" (UID: \"2dd4a143-c433-4caf-9416-c99755ec1bc5\") " pod="openshift-marketplace/certified-operators-vcns7"
Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.889208 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dd4a143-c433-4caf-9416-c99755ec1bc5-utilities\") pod \"certified-operators-vcns7\" (UID: \"2dd4a143-c433-4caf-9416-c99755ec1bc5\") " pod="openshift-marketplace/certified-operators-vcns7"
Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.915422 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8js5p\" (UniqueName: \"kubernetes.io/projected/2dd4a143-c433-4caf-9416-c99755ec1bc5-kube-api-access-8js5p\") pod \"certified-operators-vcns7\" (UID: \"2dd4a143-c433-4caf-9416-c99755ec1bc5\") " pod="openshift-marketplace/certified-operators-vcns7"
Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.947076 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gk749"]
Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.948107 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gk749"
Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.949781 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.956311 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gk749"]
Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.989888 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7-catalog-content\") pod \"redhat-marketplace-gk749\" (UID: \"d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7\") " pod="openshift-marketplace/redhat-marketplace-gk749"
Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.989922 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7-utilities\") pod \"redhat-marketplace-gk749\" (UID: \"d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7\") " pod="openshift-marketplace/redhat-marketplace-gk749"
Feb 18 19:23:15 crc kubenswrapper[4942]: I0218 19:23:15.989955 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqrc6\" (UniqueName: \"kubernetes.io/projected/d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7-kube-api-access-jqrc6\") pod \"redhat-marketplace-gk749\" (UID: \"d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7\") " pod="openshift-marketplace/redhat-marketplace-gk749"
Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.091842 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqrc6\" (UniqueName: \"kubernetes.io/projected/d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7-kube-api-access-jqrc6\") pod \"redhat-marketplace-gk749\" (UID: \"d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7\") " pod="openshift-marketplace/redhat-marketplace-gk749"
Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.092017 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7-utilities\") pod \"redhat-marketplace-gk749\" (UID: \"d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7\") " pod="openshift-marketplace/redhat-marketplace-gk749"
Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.092041 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7-catalog-content\") pod \"redhat-marketplace-gk749\" (UID: \"d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7\") " pod="openshift-marketplace/redhat-marketplace-gk749"
Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.092526 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7-catalog-content\") pod \"redhat-marketplace-gk749\" (UID: \"d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7\") " pod="openshift-marketplace/redhat-marketplace-gk749"
Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.092751 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7-utilities\") pod \"redhat-marketplace-gk749\" (UID: \"d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7\") " pod="openshift-marketplace/redhat-marketplace-gk749" Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.114886 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqrc6\" (UniqueName: \"kubernetes.io/projected/d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7-kube-api-access-jqrc6\") pod \"redhat-marketplace-gk749\" (UID: \"d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7\") " pod="openshift-marketplace/redhat-marketplace-gk749" Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.120516 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vcns7" Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.274401 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gk749" Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.526460 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vcns7"] Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.693834 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gk749"] Feb 18 19:23:16 crc kubenswrapper[4942]: W0218 19:23:16.746671 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6b8eeb7_c370_4453_9f7e_d98d5ca2dab7.slice/crio-87dc1e96f4bad903f9fd2e3b55783e810cb0121517717454d4caa5df4f2f4427 WatchSource:0}: Error finding container 87dc1e96f4bad903f9fd2e3b55783e810cb0121517717454d4caa5df4f2f4427: Status 404 returned error can't find the container with id 87dc1e96f4bad903f9fd2e3b55783e810cb0121517717454d4caa5df4f2f4427 Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.834505 4942 generic.go:334] "Generic (PLEG): container finished" podID="2dd4a143-c433-4caf-9416-c99755ec1bc5" containerID="2d67488b90e723ad378bc6997e8d0910ea01d0ca368c949de065a80c98891cf7" exitCode=0 Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.834565 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcns7" event={"ID":"2dd4a143-c433-4caf-9416-c99755ec1bc5","Type":"ContainerDied","Data":"2d67488b90e723ad378bc6997e8d0910ea01d0ca368c949de065a80c98891cf7"} Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.834785 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcns7" event={"ID":"2dd4a143-c433-4caf-9416-c99755ec1bc5","Type":"ContainerStarted","Data":"6feac6c48e56392a7b0af37b8f76c48ff3c2d2a71b037956c438730a7ce226ae"} Feb 18 19:23:16 crc kubenswrapper[4942]: I0218 19:23:16.836514 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-gk749" event={"ID":"d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7","Type":"ContainerStarted","Data":"87dc1e96f4bad903f9fd2e3b55783e810cb0121517717454d4caa5df4f2f4427"} Feb 18 19:23:17 crc kubenswrapper[4942]: I0218 19:23:17.842904 4942 generic.go:334] "Generic (PLEG): container finished" podID="d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7" containerID="58306be1031c1f15f908358aea5f3bec04348aeb4791e18ff870bc3e971b704c" exitCode=0 Feb 18 19:23:17 crc kubenswrapper[4942]: I0218 19:23:17.842992 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gk749" event={"ID":"d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7","Type":"ContainerDied","Data":"58306be1031c1f15f908358aea5f3bec04348aeb4791e18ff870bc3e971b704c"} Feb 18 19:23:17 crc kubenswrapper[4942]: I0218 19:23:17.845703 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcns7" event={"ID":"2dd4a143-c433-4caf-9416-c99755ec1bc5","Type":"ContainerStarted","Data":"d46d5d5064d8560819d8f7094daf05dbd4c499c7577a017891ae2ebd861026eb"} Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.153141 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hm9ft"] Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.154329 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.156207 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.168360 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hm9ft"] Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.220950 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66cgn\" (UniqueName: \"kubernetes.io/projected/e446c051-f451-4260-b0f7-d2b08c7ae991-kube-api-access-66cgn\") pod \"redhat-operators-hm9ft\" (UID: \"e446c051-f451-4260-b0f7-d2b08c7ae991\") " pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.221012 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e446c051-f451-4260-b0f7-d2b08c7ae991-catalog-content\") pod \"redhat-operators-hm9ft\" (UID: \"e446c051-f451-4260-b0f7-d2b08c7ae991\") " pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.221045 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e446c051-f451-4260-b0f7-d2b08c7ae991-utilities\") pod \"redhat-operators-hm9ft\" (UID: \"e446c051-f451-4260-b0f7-d2b08c7ae991\") " pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.323566 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e446c051-f451-4260-b0f7-d2b08c7ae991-catalog-content\") pod \"redhat-operators-hm9ft\" (UID: 
\"e446c051-f451-4260-b0f7-d2b08c7ae991\") " pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.323650 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e446c051-f451-4260-b0f7-d2b08c7ae991-utilities\") pod \"redhat-operators-hm9ft\" (UID: \"e446c051-f451-4260-b0f7-d2b08c7ae991\") " pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.323715 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66cgn\" (UniqueName: \"kubernetes.io/projected/e446c051-f451-4260-b0f7-d2b08c7ae991-kube-api-access-66cgn\") pod \"redhat-operators-hm9ft\" (UID: \"e446c051-f451-4260-b0f7-d2b08c7ae991\") " pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.324517 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e446c051-f451-4260-b0f7-d2b08c7ae991-catalog-content\") pod \"redhat-operators-hm9ft\" (UID: \"e446c051-f451-4260-b0f7-d2b08c7ae991\") " pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.324659 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e446c051-f451-4260-b0f7-d2b08c7ae991-utilities\") pod \"redhat-operators-hm9ft\" (UID: \"e446c051-f451-4260-b0f7-d2b08c7ae991\") " pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.355086 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66cgn\" (UniqueName: \"kubernetes.io/projected/e446c051-f451-4260-b0f7-d2b08c7ae991-kube-api-access-66cgn\") pod \"redhat-operators-hm9ft\" (UID: \"e446c051-f451-4260-b0f7-d2b08c7ae991\") " 
pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.357125 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z5cvk"] Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.359587 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.361288 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z5cvk"] Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.363918 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.425366 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g66n2\" (UniqueName: \"kubernetes.io/projected/0ab9f5c3-07c4-4635-9c50-a42b85ad0752-kube-api-access-g66n2\") pod \"community-operators-z5cvk\" (UID: \"0ab9f5c3-07c4-4635-9c50-a42b85ad0752\") " pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.425416 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ab9f5c3-07c4-4635-9c50-a42b85ad0752-catalog-content\") pod \"community-operators-z5cvk\" (UID: \"0ab9f5c3-07c4-4635-9c50-a42b85ad0752\") " pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.425468 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ab9f5c3-07c4-4635-9c50-a42b85ad0752-utilities\") pod \"community-operators-z5cvk\" (UID: \"0ab9f5c3-07c4-4635-9c50-a42b85ad0752\") " 
pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.476396 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.526843 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g66n2\" (UniqueName: \"kubernetes.io/projected/0ab9f5c3-07c4-4635-9c50-a42b85ad0752-kube-api-access-g66n2\") pod \"community-operators-z5cvk\" (UID: \"0ab9f5c3-07c4-4635-9c50-a42b85ad0752\") " pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.527285 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ab9f5c3-07c4-4635-9c50-a42b85ad0752-catalog-content\") pod \"community-operators-z5cvk\" (UID: \"0ab9f5c3-07c4-4635-9c50-a42b85ad0752\") " pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.527670 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ab9f5c3-07c4-4635-9c50-a42b85ad0752-utilities\") pod \"community-operators-z5cvk\" (UID: \"0ab9f5c3-07c4-4635-9c50-a42b85ad0752\") " pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.528178 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ab9f5c3-07c4-4635-9c50-a42b85ad0752-catalog-content\") pod \"community-operators-z5cvk\" (UID: \"0ab9f5c3-07c4-4635-9c50-a42b85ad0752\") " pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.528596 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0ab9f5c3-07c4-4635-9c50-a42b85ad0752-utilities\") pod \"community-operators-z5cvk\" (UID: \"0ab9f5c3-07c4-4635-9c50-a42b85ad0752\") " pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.546111 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g66n2\" (UniqueName: \"kubernetes.io/projected/0ab9f5c3-07c4-4635-9c50-a42b85ad0752-kube-api-access-g66n2\") pod \"community-operators-z5cvk\" (UID: \"0ab9f5c3-07c4-4635-9c50-a42b85ad0752\") " pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.696383 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.867200 4942 generic.go:334] "Generic (PLEG): container finished" podID="2dd4a143-c433-4caf-9416-c99755ec1bc5" containerID="d46d5d5064d8560819d8f7094daf05dbd4c499c7577a017891ae2ebd861026eb" exitCode=0 Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.867526 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcns7" event={"ID":"2dd4a143-c433-4caf-9416-c99755ec1bc5","Type":"ContainerDied","Data":"d46d5d5064d8560819d8f7094daf05dbd4c499c7577a017891ae2ebd861026eb"} Feb 18 19:23:18 crc kubenswrapper[4942]: I0218 19:23:18.974039 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hm9ft"] Feb 18 19:23:19 crc kubenswrapper[4942]: I0218 19:23:19.141497 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z5cvk"] Feb 18 19:23:19 crc kubenswrapper[4942]: W0218 19:23:19.175264 4942 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ab9f5c3_07c4_4635_9c50_a42b85ad0752.slice/crio-724f2f43deed25ba4d56cae3bf0c6c5fe6adc3d9ab675f421897534aa4ee9b4e WatchSource:0}: Error finding container 724f2f43deed25ba4d56cae3bf0c6c5fe6adc3d9ab675f421897534aa4ee9b4e: Status 404 returned error can't find the container with id 724f2f43deed25ba4d56cae3bf0c6c5fe6adc3d9ab675f421897534aa4ee9b4e Feb 18 19:23:19 crc kubenswrapper[4942]: I0218 19:23:19.880872 4942 generic.go:334] "Generic (PLEG): container finished" podID="e446c051-f451-4260-b0f7-d2b08c7ae991" containerID="329f78c5fa69f6508a3a5a0ef3ea0e4eed0a29583b2f63ff497a5196576be246" exitCode=0 Feb 18 19:23:19 crc kubenswrapper[4942]: I0218 19:23:19.880965 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hm9ft" event={"ID":"e446c051-f451-4260-b0f7-d2b08c7ae991","Type":"ContainerDied","Data":"329f78c5fa69f6508a3a5a0ef3ea0e4eed0a29583b2f63ff497a5196576be246"} Feb 18 19:23:19 crc kubenswrapper[4942]: I0218 19:23:19.884162 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hm9ft" event={"ID":"e446c051-f451-4260-b0f7-d2b08c7ae991","Type":"ContainerStarted","Data":"60d3f1cc2e867768c6a9d5d89723aae6d1c530f4e516f46e5be89899c1bf7134"} Feb 18 19:23:19 crc kubenswrapper[4942]: I0218 19:23:19.884577 4942 generic.go:334] "Generic (PLEG): container finished" podID="d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7" containerID="bf9fc8b047455cbf0f22f686ccbfba4718ea0819913e33cdf05f26cdea43b794" exitCode=0 Feb 18 19:23:19 crc kubenswrapper[4942]: I0218 19:23:19.884625 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gk749" event={"ID":"d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7","Type":"ContainerDied","Data":"bf9fc8b047455cbf0f22f686ccbfba4718ea0819913e33cdf05f26cdea43b794"} Feb 18 19:23:19 crc kubenswrapper[4942]: I0218 19:23:19.886226 4942 generic.go:334] "Generic (PLEG): container 
finished" podID="0ab9f5c3-07c4-4635-9c50-a42b85ad0752" containerID="1b589d633c05a4b5cb0cd7dbfe1e529f5e59b71d54dee0f5a2733d4695b96757" exitCode=0 Feb 18 19:23:19 crc kubenswrapper[4942]: I0218 19:23:19.886305 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5cvk" event={"ID":"0ab9f5c3-07c4-4635-9c50-a42b85ad0752","Type":"ContainerDied","Data":"1b589d633c05a4b5cb0cd7dbfe1e529f5e59b71d54dee0f5a2733d4695b96757"} Feb 18 19:23:19 crc kubenswrapper[4942]: I0218 19:23:19.886811 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5cvk" event={"ID":"0ab9f5c3-07c4-4635-9c50-a42b85ad0752","Type":"ContainerStarted","Data":"724f2f43deed25ba4d56cae3bf0c6c5fe6adc3d9ab675f421897534aa4ee9b4e"} Feb 18 19:23:19 crc kubenswrapper[4942]: I0218 19:23:19.889004 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcns7" event={"ID":"2dd4a143-c433-4caf-9416-c99755ec1bc5","Type":"ContainerStarted","Data":"0be7e0660beef651a0ad00dcf4e20adffe239285f7559ab52faaf44cc69dbc0c"} Feb 18 19:23:19 crc kubenswrapper[4942]: I0218 19:23:19.953585 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vcns7" podStartSLOduration=2.283402345 podStartE2EDuration="4.95356884s" podCreationTimestamp="2026-02-18 19:23:15 +0000 UTC" firstStartedPulling="2026-02-18 19:23:16.836562044 +0000 UTC m=+356.541494709" lastFinishedPulling="2026-02-18 19:23:19.506728539 +0000 UTC m=+359.211661204" observedRunningTime="2026-02-18 19:23:19.951941885 +0000 UTC m=+359.656874570" watchObservedRunningTime="2026-02-18 19:23:19.95356884 +0000 UTC m=+359.658501515" Feb 18 19:23:21 crc kubenswrapper[4942]: I0218 19:23:21.926904 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hm9ft" 
event={"ID":"e446c051-f451-4260-b0f7-d2b08c7ae991","Type":"ContainerStarted","Data":"5e28183415c1de7a5c72a835abeac7915c3a18c21961567a325b259015487417"} Feb 18 19:23:21 crc kubenswrapper[4942]: I0218 19:23:21.933800 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gk749" event={"ID":"d6b8eeb7-c370-4453-9f7e-d98d5ca2dab7","Type":"ContainerStarted","Data":"f94c96e115af3fba4734cc5226fdefac5bd50d3a4e09dc7e61125cff9b6763a6"} Feb 18 19:23:21 crc kubenswrapper[4942]: I0218 19:23:21.935948 4942 generic.go:334] "Generic (PLEG): container finished" podID="0ab9f5c3-07c4-4635-9c50-a42b85ad0752" containerID="9522e32c1343be17233dc44774f2cf79d47cf58830e666d6666f69cafacfebe2" exitCode=0 Feb 18 19:23:21 crc kubenswrapper[4942]: I0218 19:23:21.936017 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5cvk" event={"ID":"0ab9f5c3-07c4-4635-9c50-a42b85ad0752","Type":"ContainerDied","Data":"9522e32c1343be17233dc44774f2cf79d47cf58830e666d6666f69cafacfebe2"} Feb 18 19:23:22 crc kubenswrapper[4942]: I0218 19:23:22.942913 4942 generic.go:334] "Generic (PLEG): container finished" podID="e446c051-f451-4260-b0f7-d2b08c7ae991" containerID="5e28183415c1de7a5c72a835abeac7915c3a18c21961567a325b259015487417" exitCode=0 Feb 18 19:23:22 crc kubenswrapper[4942]: I0218 19:23:22.942993 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hm9ft" event={"ID":"e446c051-f451-4260-b0f7-d2b08c7ae991","Type":"ContainerDied","Data":"5e28183415c1de7a5c72a835abeac7915c3a18c21961567a325b259015487417"} Feb 18 19:23:22 crc kubenswrapper[4942]: I0218 19:23:22.946828 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5cvk" event={"ID":"0ab9f5c3-07c4-4635-9c50-a42b85ad0752","Type":"ContainerStarted","Data":"ab1e0708e25431cf82c6a75f311eaf07b10b02a62e742fc93b4144890308bbf1"} Feb 18 19:23:22 crc kubenswrapper[4942]: I0218 
19:23:22.965859 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gk749" podStartSLOduration=5.5073350229999996 podStartE2EDuration="7.965840904s" podCreationTimestamp="2026-02-18 19:23:15 +0000 UTC" firstStartedPulling="2026-02-18 19:23:17.84482078 +0000 UTC m=+357.549753445" lastFinishedPulling="2026-02-18 19:23:20.303326651 +0000 UTC m=+360.008259326" observedRunningTime="2026-02-18 19:23:22.008302152 +0000 UTC m=+361.713234817" watchObservedRunningTime="2026-02-18 19:23:22.965840904 +0000 UTC m=+362.670773589" Feb 18 19:23:23 crc kubenswrapper[4942]: I0218 19:23:23.741562 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:23:23 crc kubenswrapper[4942]: I0218 19:23:23.741971 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:23:23 crc kubenswrapper[4942]: I0218 19:23:23.953236 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hm9ft" event={"ID":"e446c051-f451-4260-b0f7-d2b08c7ae991","Type":"ContainerStarted","Data":"7a8346f985956c9b038690085d37ebf096c58262f3b6decae2eca3e2cd738fae"} Feb 18 19:23:23 crc kubenswrapper[4942]: I0218 19:23:23.980607 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hm9ft" podStartSLOduration=2.323723777 podStartE2EDuration="5.980592971s" podCreationTimestamp="2026-02-18 19:23:18 +0000 UTC" firstStartedPulling="2026-02-18 
19:23:19.883227821 +0000 UTC m=+359.588160486" lastFinishedPulling="2026-02-18 19:23:23.540097015 +0000 UTC m=+363.245029680" observedRunningTime="2026-02-18 19:23:23.978194594 +0000 UTC m=+363.683127269" watchObservedRunningTime="2026-02-18 19:23:23.980592971 +0000 UTC m=+363.685525636" Feb 18 19:23:23 crc kubenswrapper[4942]: I0218 19:23:23.981272 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z5cvk" podStartSLOduration=3.548590315 podStartE2EDuration="5.981265879s" podCreationTimestamp="2026-02-18 19:23:18 +0000 UTC" firstStartedPulling="2026-02-18 19:23:19.887027976 +0000 UTC m=+359.591960641" lastFinishedPulling="2026-02-18 19:23:22.31970354 +0000 UTC m=+362.024636205" observedRunningTime="2026-02-18 19:23:22.982400752 +0000 UTC m=+362.687333407" watchObservedRunningTime="2026-02-18 19:23:23.981265879 +0000 UTC m=+363.686198544" Feb 18 19:23:26 crc kubenswrapper[4942]: I0218 19:23:26.120948 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vcns7" Feb 18 19:23:26 crc kubenswrapper[4942]: I0218 19:23:26.121311 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vcns7" Feb 18 19:23:26 crc kubenswrapper[4942]: I0218 19:23:26.162430 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vcns7" Feb 18 19:23:26 crc kubenswrapper[4942]: I0218 19:23:26.274529 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gk749" Feb 18 19:23:26 crc kubenswrapper[4942]: I0218 19:23:26.274811 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gk749" Feb 18 19:23:26 crc kubenswrapper[4942]: I0218 19:23:26.321439 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-marketplace-gk749" Feb 18 19:23:27 crc kubenswrapper[4942]: I0218 19:23:27.014080 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vcns7" Feb 18 19:23:27 crc kubenswrapper[4942]: I0218 19:23:27.014140 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gk749" Feb 18 19:23:28 crc kubenswrapper[4942]: I0218 19:23:28.477909 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:28 crc kubenswrapper[4942]: I0218 19:23:28.478225 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:28 crc kubenswrapper[4942]: I0218 19:23:28.696750 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:28 crc kubenswrapper[4942]: I0218 19:23:28.696808 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:28 crc kubenswrapper[4942]: I0218 19:23:28.759271 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:29 crc kubenswrapper[4942]: I0218 19:23:29.045329 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z5cvk" Feb 18 19:23:29 crc kubenswrapper[4942]: I0218 19:23:29.546663 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hm9ft" podUID="e446c051-f451-4260-b0f7-d2b08c7ae991" containerName="registry-server" probeResult="failure" output=< Feb 18 19:23:29 crc kubenswrapper[4942]: timeout: failed to connect service ":50051" within 1s Feb 18 19:23:29 crc kubenswrapper[4942]: 
> Feb 18 19:23:38 crc kubenswrapper[4942]: I0218 19:23:38.551027 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:38 crc kubenswrapper[4942]: I0218 19:23:38.623726 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hm9ft" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.225658 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fjd9g"] Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.226583 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.249002 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fjd9g"] Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.411910 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-trusted-ca\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.411971 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.412009 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" 
(UniqueName: \"kubernetes.io/empty-dir/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.412126 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-registry-tls\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.412163 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-registry-certificates\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.412199 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x9rc\" (UniqueName: \"kubernetes.io/projected/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-kube-api-access-5x9rc\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.412277 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-bound-sa-token\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 
19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.412366 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.445359 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.514071 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-trusted-ca\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.514156 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.514245 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-registry-tls\") pod \"image-registry-66df7c8f76-fjd9g\" 
(UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.514277 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-registry-certificates\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.514316 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x9rc\" (UniqueName: \"kubernetes.io/projected/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-kube-api-access-5x9rc\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.514361 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-bound-sa-token\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.514405 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.518118 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-registry-certificates\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.518361 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-trusted-ca\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.518588 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.520839 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.521849 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-registry-tls\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.540474 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-bound-sa-token\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.559720 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x9rc\" (UniqueName: \"kubernetes.io/projected/d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c-kube-api-access-5x9rc\") pod \"image-registry-66df7c8f76-fjd9g\" (UID: \"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:42 crc kubenswrapper[4942]: I0218 19:23:42.843323 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:43 crc kubenswrapper[4942]: I0218 19:23:43.305833 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fjd9g"] Feb 18 19:23:43 crc kubenswrapper[4942]: W0218 19:23:43.316007 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5f6f3e4_1894_4fed_8ac9_5eeb9480ce4c.slice/crio-778419abdbfccb980aa4b6219aef6eb29850b3466f060d255b31fe06c9e1c8f2 WatchSource:0}: Error finding container 778419abdbfccb980aa4b6219aef6eb29850b3466f060d255b31fe06c9e1c8f2: Status 404 returned error can't find the container with id 778419abdbfccb980aa4b6219aef6eb29850b3466f060d255b31fe06c9e1c8f2 Feb 18 19:23:44 crc kubenswrapper[4942]: I0218 19:23:44.066007 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" event={"ID":"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c","Type":"ContainerStarted","Data":"f13b79d47ee4fad9b03a0ff0a36b3da5e6e5ed7241c21bfb36f3675f8c81ecba"} Feb 18 19:23:44 crc kubenswrapper[4942]: I0218 
19:23:44.066089 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" event={"ID":"d5f6f3e4-1894-4fed-8ac9-5eeb9480ce4c","Type":"ContainerStarted","Data":"778419abdbfccb980aa4b6219aef6eb29850b3466f060d255b31fe06c9e1c8f2"} Feb 18 19:23:44 crc kubenswrapper[4942]: I0218 19:23:44.066412 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:23:44 crc kubenswrapper[4942]: I0218 19:23:44.094012 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" podStartSLOduration=2.093976441 podStartE2EDuration="2.093976441s" podCreationTimestamp="2026-02-18 19:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:23:44.091668517 +0000 UTC m=+383.796601242" watchObservedRunningTime="2026-02-18 19:23:44.093976441 +0000 UTC m=+383.798909146" Feb 18 19:23:47 crc kubenswrapper[4942]: I0218 19:23:47.992360 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"] Feb 18 19:23:47 crc kubenswrapper[4942]: I0218 19:23:47.992785 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" podUID="b0c3cea3-65a4-46fc-9185-d057169b4174" containerName="route-controller-manager" containerID="cri-o://7d305ffaded32d689690e4389ef3771bc28998898ef46b672d0975af1f0d2c1d" gracePeriod=30 Feb 18 19:23:48 crc kubenswrapper[4942]: I0218 19:23:48.369578 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" Feb 18 19:23:48 crc kubenswrapper[4942]: I0218 19:23:48.504971 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjdbd\" (UniqueName: \"kubernetes.io/projected/b0c3cea3-65a4-46fc-9185-d057169b4174-kube-api-access-kjdbd\") pod \"b0c3cea3-65a4-46fc-9185-d057169b4174\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " Feb 18 19:23:48 crc kubenswrapper[4942]: I0218 19:23:48.505101 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0c3cea3-65a4-46fc-9185-d057169b4174-serving-cert\") pod \"b0c3cea3-65a4-46fc-9185-d057169b4174\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " Feb 18 19:23:48 crc kubenswrapper[4942]: I0218 19:23:48.505204 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0c3cea3-65a4-46fc-9185-d057169b4174-client-ca\") pod \"b0c3cea3-65a4-46fc-9185-d057169b4174\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " Feb 18 19:23:48 crc kubenswrapper[4942]: I0218 19:23:48.505390 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0c3cea3-65a4-46fc-9185-d057169b4174-config\") pod \"b0c3cea3-65a4-46fc-9185-d057169b4174\" (UID: \"b0c3cea3-65a4-46fc-9185-d057169b4174\") " Feb 18 19:23:48 crc kubenswrapper[4942]: I0218 19:23:48.506741 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0c3cea3-65a4-46fc-9185-d057169b4174-client-ca" (OuterVolumeSpecName: "client-ca") pod "b0c3cea3-65a4-46fc-9185-d057169b4174" (UID: "b0c3cea3-65a4-46fc-9185-d057169b4174"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:23:48 crc kubenswrapper[4942]: I0218 19:23:48.507197 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0c3cea3-65a4-46fc-9185-d057169b4174-config" (OuterVolumeSpecName: "config") pod "b0c3cea3-65a4-46fc-9185-d057169b4174" (UID: "b0c3cea3-65a4-46fc-9185-d057169b4174"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:23:48 crc kubenswrapper[4942]: I0218 19:23:48.509935 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0c3cea3-65a4-46fc-9185-d057169b4174-kube-api-access-kjdbd" (OuterVolumeSpecName: "kube-api-access-kjdbd") pod "b0c3cea3-65a4-46fc-9185-d057169b4174" (UID: "b0c3cea3-65a4-46fc-9185-d057169b4174"). InnerVolumeSpecName "kube-api-access-kjdbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:23:48 crc kubenswrapper[4942]: I0218 19:23:48.514406 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0c3cea3-65a4-46fc-9185-d057169b4174-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b0c3cea3-65a4-46fc-9185-d057169b4174" (UID: "b0c3cea3-65a4-46fc-9185-d057169b4174"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:23:48 crc kubenswrapper[4942]: I0218 19:23:48.606828 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0c3cea3-65a4-46fc-9185-d057169b4174-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:48 crc kubenswrapper[4942]: I0218 19:23:48.606883 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjdbd\" (UniqueName: \"kubernetes.io/projected/b0c3cea3-65a4-46fc-9185-d057169b4174-kube-api-access-kjdbd\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:48 crc kubenswrapper[4942]: I0218 19:23:48.606904 4942 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0c3cea3-65a4-46fc-9185-d057169b4174-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:48 crc kubenswrapper[4942]: I0218 19:23:48.606921 4942 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0c3cea3-65a4-46fc-9185-d057169b4174-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.094132 4942 generic.go:334] "Generic (PLEG): container finished" podID="b0c3cea3-65a4-46fc-9185-d057169b4174" containerID="7d305ffaded32d689690e4389ef3771bc28998898ef46b672d0975af1f0d2c1d" exitCode=0 Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.094202 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" event={"ID":"b0c3cea3-65a4-46fc-9185-d057169b4174","Type":"ContainerDied","Data":"7d305ffaded32d689690e4389ef3771bc28998898ef46b672d0975af1f0d2c1d"} Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.094242 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" 
event={"ID":"b0c3cea3-65a4-46fc-9185-d057169b4174","Type":"ContainerDied","Data":"714d652ad9fc0a287b99250d51e1e142f7f1431c0d7742444938d5e3b88e03f0"} Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.094265 4942 scope.go:117] "RemoveContainer" containerID="7d305ffaded32d689690e4389ef3771bc28998898ef46b672d0975af1f0d2c1d" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.094318 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.111151 4942 scope.go:117] "RemoveContainer" containerID="7d305ffaded32d689690e4389ef3771bc28998898ef46b672d0975af1f0d2c1d" Feb 18 19:23:49 crc kubenswrapper[4942]: E0218 19:23:49.112101 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d305ffaded32d689690e4389ef3771bc28998898ef46b672d0975af1f0d2c1d\": container with ID starting with 7d305ffaded32d689690e4389ef3771bc28998898ef46b672d0975af1f0d2c1d not found: ID does not exist" containerID="7d305ffaded32d689690e4389ef3771bc28998898ef46b672d0975af1f0d2c1d" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.112145 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d305ffaded32d689690e4389ef3771bc28998898ef46b672d0975af1f0d2c1d"} err="failed to get container status \"7d305ffaded32d689690e4389ef3771bc28998898ef46b672d0975af1f0d2c1d\": rpc error: code = NotFound desc = could not find container \"7d305ffaded32d689690e4389ef3771bc28998898ef46b672d0975af1f0d2c1d\": container with ID starting with 7d305ffaded32d689690e4389ef3771bc28998898ef46b672d0975af1f0d2c1d not found: ID does not exist" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.117948 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"] Feb 
18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.122056 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c6fb6955d-rp6mz"] Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.139564 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj"] Feb 18 19:23:49 crc kubenswrapper[4942]: E0218 19:23:49.139785 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c3cea3-65a4-46fc-9185-d057169b4174" containerName="route-controller-manager" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.139798 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c3cea3-65a4-46fc-9185-d057169b4174" containerName="route-controller-manager" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.139894 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0c3cea3-65a4-46fc-9185-d057169b4174" containerName="route-controller-manager" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.140365 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.142209 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.143354 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.143360 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.143526 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.144844 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.145074 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.150440 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj"] Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.316549 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c290d1c-13cd-4018-a8ca-1f57494eaf51-serving-cert\") pod \"route-controller-manager-5c48bc88ff-kbhxj\" (UID: \"2c290d1c-13cd-4018-a8ca-1f57494eaf51\") " pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.316628 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c290d1c-13cd-4018-a8ca-1f57494eaf51-client-ca\") pod \"route-controller-manager-5c48bc88ff-kbhxj\" (UID: \"2c290d1c-13cd-4018-a8ca-1f57494eaf51\") " pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.316852 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c290d1c-13cd-4018-a8ca-1f57494eaf51-config\") pod \"route-controller-manager-5c48bc88ff-kbhxj\" (UID: \"2c290d1c-13cd-4018-a8ca-1f57494eaf51\") " pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.316955 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6f5b\" (UniqueName: \"kubernetes.io/projected/2c290d1c-13cd-4018-a8ca-1f57494eaf51-kube-api-access-c6f5b\") pod \"route-controller-manager-5c48bc88ff-kbhxj\" (UID: \"2c290d1c-13cd-4018-a8ca-1f57494eaf51\") " pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.417979 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c290d1c-13cd-4018-a8ca-1f57494eaf51-serving-cert\") pod \"route-controller-manager-5c48bc88ff-kbhxj\" (UID: \"2c290d1c-13cd-4018-a8ca-1f57494eaf51\") " pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.418034 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c290d1c-13cd-4018-a8ca-1f57494eaf51-client-ca\") pod 
\"route-controller-manager-5c48bc88ff-kbhxj\" (UID: \"2c290d1c-13cd-4018-a8ca-1f57494eaf51\") " pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.418096 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c290d1c-13cd-4018-a8ca-1f57494eaf51-config\") pod \"route-controller-manager-5c48bc88ff-kbhxj\" (UID: \"2c290d1c-13cd-4018-a8ca-1f57494eaf51\") " pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.418134 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6f5b\" (UniqueName: \"kubernetes.io/projected/2c290d1c-13cd-4018-a8ca-1f57494eaf51-kube-api-access-c6f5b\") pod \"route-controller-manager-5c48bc88ff-kbhxj\" (UID: \"2c290d1c-13cd-4018-a8ca-1f57494eaf51\") " pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.419932 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c290d1c-13cd-4018-a8ca-1f57494eaf51-config\") pod \"route-controller-manager-5c48bc88ff-kbhxj\" (UID: \"2c290d1c-13cd-4018-a8ca-1f57494eaf51\") " pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.421716 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c290d1c-13cd-4018-a8ca-1f57494eaf51-client-ca\") pod \"route-controller-manager-5c48bc88ff-kbhxj\" (UID: \"2c290d1c-13cd-4018-a8ca-1f57494eaf51\") " pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.424526 4942 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c290d1c-13cd-4018-a8ca-1f57494eaf51-serving-cert\") pod \"route-controller-manager-5c48bc88ff-kbhxj\" (UID: \"2c290d1c-13cd-4018-a8ca-1f57494eaf51\") " pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.445577 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6f5b\" (UniqueName: \"kubernetes.io/projected/2c290d1c-13cd-4018-a8ca-1f57494eaf51-kube-api-access-c6f5b\") pod \"route-controller-manager-5c48bc88ff-kbhxj\" (UID: \"2c290d1c-13cd-4018-a8ca-1f57494eaf51\") " pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.466447 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:49 crc kubenswrapper[4942]: I0218 19:23:49.867727 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj"] Feb 18 19:23:50 crc kubenswrapper[4942]: I0218 19:23:50.108719 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" event={"ID":"2c290d1c-13cd-4018-a8ca-1f57494eaf51","Type":"ContainerStarted","Data":"149575ced0e167a0eed5ca21f1b301d58fec0509ff1b889674fa452cc858f349"} Feb 18 19:23:51 crc kubenswrapper[4942]: I0218 19:23:51.060037 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0c3cea3-65a4-46fc-9185-d057169b4174" path="/var/lib/kubelet/pods/b0c3cea3-65a4-46fc-9185-d057169b4174/volumes" Feb 18 19:23:51 crc kubenswrapper[4942]: I0218 19:23:51.121820 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" event={"ID":"2c290d1c-13cd-4018-a8ca-1f57494eaf51","Type":"ContainerStarted","Data":"637e73864c6fa658ca69250ef298149cef6817155631ac96b00f3f7a70395ab3"} Feb 18 19:23:51 crc kubenswrapper[4942]: I0218 19:23:51.122381 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:51 crc kubenswrapper[4942]: I0218 19:23:51.128648 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" Feb 18 19:23:51 crc kubenswrapper[4942]: I0218 19:23:51.154127 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c48bc88ff-kbhxj" podStartSLOduration=3.154095603 podStartE2EDuration="3.154095603s" podCreationTimestamp="2026-02-18 19:23:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:23:51.143725835 +0000 UTC m=+390.848658540" watchObservedRunningTime="2026-02-18 19:23:51.154095603 +0000 UTC m=+390.859028308" Feb 18 19:23:53 crc kubenswrapper[4942]: I0218 19:23:53.740910 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:23:53 crc kubenswrapper[4942]: I0218 19:23:53.741325 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Feb 18 19:24:02 crc kubenswrapper[4942]: I0218 19:24:02.848751 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-fjd9g" Feb 18 19:24:02 crc kubenswrapper[4942]: I0218 19:24:02.921040 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2fcrf"] Feb 18 19:24:23 crc kubenswrapper[4942]: I0218 19:24:23.740917 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:24:23 crc kubenswrapper[4942]: I0218 19:24:23.741816 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:24:23 crc kubenswrapper[4942]: I0218 19:24:23.741891 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:24:23 crc kubenswrapper[4942]: I0218 19:24:23.742920 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cbd8c39f4ca27a862760680c197d71be21444460d43b83855f644da4c249ce06"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:24:23 crc kubenswrapper[4942]: I0218 19:24:23.743030 4942 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://cbd8c39f4ca27a862760680c197d71be21444460d43b83855f644da4c249ce06" gracePeriod=600
Feb 18 19:24:24 crc kubenswrapper[4942]: I0218 19:24:24.372137 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="cbd8c39f4ca27a862760680c197d71be21444460d43b83855f644da4c249ce06" exitCode=0
Feb 18 19:24:24 crc kubenswrapper[4942]: I0218 19:24:24.372267 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"cbd8c39f4ca27a862760680c197d71be21444460d43b83855f644da4c249ce06"}
Feb 18 19:24:24 crc kubenswrapper[4942]: I0218 19:24:24.372493 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"69563ccc2ca715071d77cf8ee678820b7e15eada4a6e511a3ef021c2758d0101"}
Feb 18 19:24:24 crc kubenswrapper[4942]: I0218 19:24:24.372520 4942 scope.go:117] "RemoveContainer" containerID="d3f2583de812c35d32f50918d2ea1071672e650d7bb1eca09416558ca25526b1"
Feb 18 19:24:27 crc kubenswrapper[4942]: I0218 19:24:27.968757 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" podUID="087f0c6b-3e9f-4db4-bbcb-a8075e218219" containerName="registry" containerID="cri-o://91e860bb5e26a16c65c27e2d570478576e7d6d20c751b07a7d8ecff08551af59" gracePeriod=30
Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.404965 4942 generic.go:334] "Generic (PLEG): container finished" podID="087f0c6b-3e9f-4db4-bbcb-a8075e218219" containerID="91e860bb5e26a16c65c27e2d570478576e7d6d20c751b07a7d8ecff08551af59" exitCode=0
Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.405093 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" event={"ID":"087f0c6b-3e9f-4db4-bbcb-a8075e218219","Type":"ContainerDied","Data":"91e860bb5e26a16c65c27e2d570478576e7d6d20c751b07a7d8ecff08551af59"}
Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.454826 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.614754 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/087f0c6b-3e9f-4db4-bbcb-a8075e218219-trusted-ca\") pod \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") "
Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.614824 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-bound-sa-token\") pod \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") "
Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.614849 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-registry-tls\") pod \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") "
Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.615014 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") "
Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.615041 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/087f0c6b-3e9f-4db4-bbcb-a8075e218219-ca-trust-extracted\") pod \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") "
Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.615076 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/087f0c6b-3e9f-4db4-bbcb-a8075e218219-registry-certificates\") pod \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") "
Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.615133 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25z4w\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-kube-api-access-25z4w\") pod \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") "
Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.615173 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/087f0c6b-3e9f-4db4-bbcb-a8075e218219-installation-pull-secrets\") pod \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\" (UID: \"087f0c6b-3e9f-4db4-bbcb-a8075e218219\") "
Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.616033 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/087f0c6b-3e9f-4db4-bbcb-a8075e218219-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "087f0c6b-3e9f-4db4-bbcb-a8075e218219" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.618133 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/087f0c6b-3e9f-4db4-bbcb-a8075e218219-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "087f0c6b-3e9f-4db4-bbcb-a8075e218219" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.623406 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "087f0c6b-3e9f-4db4-bbcb-a8075e218219" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.624755 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/087f0c6b-3e9f-4db4-bbcb-a8075e218219-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "087f0c6b-3e9f-4db4-bbcb-a8075e218219" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.625526 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-kube-api-access-25z4w" (OuterVolumeSpecName: "kube-api-access-25z4w") pod "087f0c6b-3e9f-4db4-bbcb-a8075e218219" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219"). InnerVolumeSpecName "kube-api-access-25z4w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.626009 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "087f0c6b-3e9f-4db4-bbcb-a8075e218219" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.628304 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "087f0c6b-3e9f-4db4-bbcb-a8075e218219" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.640025 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/087f0c6b-3e9f-4db4-bbcb-a8075e218219-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "087f0c6b-3e9f-4db4-bbcb-a8075e218219" (UID: "087f0c6b-3e9f-4db4-bbcb-a8075e218219"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.717013 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25z4w\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-kube-api-access-25z4w\") on node \"crc\" DevicePath \"\""
Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.717077 4942 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/087f0c6b-3e9f-4db4-bbcb-a8075e218219-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.717104 4942 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/087f0c6b-3e9f-4db4-bbcb-a8075e218219-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.717132 4942 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.717266 4942 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/087f0c6b-3e9f-4db4-bbcb-a8075e218219-registry-tls\") on node \"crc\" DevicePath \"\""
Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.717295 4942 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/087f0c6b-3e9f-4db4-bbcb-a8075e218219-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Feb 18 19:24:28 crc kubenswrapper[4942]: I0218 19:24:28.717321 4942 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/087f0c6b-3e9f-4db4-bbcb-a8075e218219-registry-certificates\") on node \"crc\" DevicePath \"\""
Feb 18 19:24:29 crc kubenswrapper[4942]: I0218 19:24:29.415492 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf" event={"ID":"087f0c6b-3e9f-4db4-bbcb-a8075e218219","Type":"ContainerDied","Data":"c9af7faf6591829dd44fe7e25f59f09e1004d7cfb6e0f93079ef222657176a3e"}
Feb 18 19:24:29 crc kubenswrapper[4942]: I0218 19:24:29.415583 4942 scope.go:117] "RemoveContainer" containerID="91e860bb5e26a16c65c27e2d570478576e7d6d20c751b07a7d8ecff08551af59"
Feb 18 19:24:29 crc kubenswrapper[4942]: I0218 19:24:29.415870 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2fcrf"
Feb 18 19:24:29 crc kubenswrapper[4942]: I0218 19:24:29.446073 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2fcrf"]
Feb 18 19:24:29 crc kubenswrapper[4942]: I0218 19:24:29.450479 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2fcrf"]
Feb 18 19:24:31 crc kubenswrapper[4942]: I0218 19:24:31.047983 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="087f0c6b-3e9f-4db4-bbcb-a8075e218219" path="/var/lib/kubelet/pods/087f0c6b-3e9f-4db4-bbcb-a8075e218219/volumes"
Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.769532 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-kdpq5"]
Feb 18 19:26:49 crc kubenswrapper[4942]: E0218 19:26:49.773471 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="087f0c6b-3e9f-4db4-bbcb-a8075e218219" containerName="registry"
Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.773614 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="087f0c6b-3e9f-4db4-bbcb-a8075e218219" containerName="registry"
Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.773867 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="087f0c6b-3e9f-4db4-bbcb-a8075e218219" containerName="registry"
Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.774494 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kdpq5"
Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.776630 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.776788 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-kdpq5"]
Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.776972 4942 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-5ml7p"
Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.778160 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.805709 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-p9pz8"]
Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.806926 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-p9pz8"
Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.808482 4942 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-gzd7w"
Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.810911 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4fcbs"]
Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.811843 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-4fcbs"
Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.815821 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-p9pz8"]
Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.816645 4942 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-qsls8"
Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.821754 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4fcbs"]
Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.919706 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjg72\" (UniqueName: \"kubernetes.io/projected/6e365537-e12c-486a-a7e3-156ecf269ba3-kube-api-access-qjg72\") pod \"cert-manager-webhook-687f57d79b-4fcbs\" (UID: \"6e365537-e12c-486a-a7e3-156ecf269ba3\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4fcbs"
Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.919845 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5n97\" (UniqueName: \"kubernetes.io/projected/b67fb0f6-ae10-459f-82eb-516f6837a3c9-kube-api-access-w5n97\") pod \"cert-manager-858654f9db-p9pz8\" (UID: \"b67fb0f6-ae10-459f-82eb-516f6837a3c9\") " pod="cert-manager/cert-manager-858654f9db-p9pz8"
Feb 18 19:26:49 crc kubenswrapper[4942]: I0218 19:26:49.919933 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk5hq\" (UniqueName: \"kubernetes.io/projected/2d101833-8f66-4f88-931b-62659bb0b37e-kube-api-access-tk5hq\") pod \"cert-manager-cainjector-cf98fcc89-kdpq5\" (UID: \"2d101833-8f66-4f88-931b-62659bb0b37e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-kdpq5"
Feb 18 19:26:50 crc kubenswrapper[4942]: I0218 19:26:50.021742 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjg72\" (UniqueName: \"kubernetes.io/projected/6e365537-e12c-486a-a7e3-156ecf269ba3-kube-api-access-qjg72\") pod \"cert-manager-webhook-687f57d79b-4fcbs\" (UID: \"6e365537-e12c-486a-a7e3-156ecf269ba3\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4fcbs"
Feb 18 19:26:50 crc kubenswrapper[4942]: I0218 19:26:50.021856 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5n97\" (UniqueName: \"kubernetes.io/projected/b67fb0f6-ae10-459f-82eb-516f6837a3c9-kube-api-access-w5n97\") pod \"cert-manager-858654f9db-p9pz8\" (UID: \"b67fb0f6-ae10-459f-82eb-516f6837a3c9\") " pod="cert-manager/cert-manager-858654f9db-p9pz8"
Feb 18 19:26:50 crc kubenswrapper[4942]: I0218 19:26:50.022019 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk5hq\" (UniqueName: \"kubernetes.io/projected/2d101833-8f66-4f88-931b-62659bb0b37e-kube-api-access-tk5hq\") pod \"cert-manager-cainjector-cf98fcc89-kdpq5\" (UID: \"2d101833-8f66-4f88-931b-62659bb0b37e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-kdpq5"
Feb 18 19:26:50 crc kubenswrapper[4942]: I0218 19:26:50.041338 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk5hq\" (UniqueName: \"kubernetes.io/projected/2d101833-8f66-4f88-931b-62659bb0b37e-kube-api-access-tk5hq\") pod \"cert-manager-cainjector-cf98fcc89-kdpq5\" (UID: \"2d101833-8f66-4f88-931b-62659bb0b37e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-kdpq5"
Feb 18 19:26:50 crc kubenswrapper[4942]: I0218 19:26:50.045697 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjg72\" (UniqueName: \"kubernetes.io/projected/6e365537-e12c-486a-a7e3-156ecf269ba3-kube-api-access-qjg72\") pod \"cert-manager-webhook-687f57d79b-4fcbs\" (UID: \"6e365537-e12c-486a-a7e3-156ecf269ba3\") " pod="cert-manager/cert-manager-webhook-687f57d79b-4fcbs"
Feb 18 19:26:50 crc kubenswrapper[4942]: I0218 19:26:50.046178 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5n97\" (UniqueName: \"kubernetes.io/projected/b67fb0f6-ae10-459f-82eb-516f6837a3c9-kube-api-access-w5n97\") pod \"cert-manager-858654f9db-p9pz8\" (UID: \"b67fb0f6-ae10-459f-82eb-516f6837a3c9\") " pod="cert-manager/cert-manager-858654f9db-p9pz8"
Feb 18 19:26:50 crc kubenswrapper[4942]: I0218 19:26:50.103534 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kdpq5"
Feb 18 19:26:50 crc kubenswrapper[4942]: I0218 19:26:50.135469 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-p9pz8"
Feb 18 19:26:50 crc kubenswrapper[4942]: I0218 19:26:50.142811 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-4fcbs"
Feb 18 19:26:50 crc kubenswrapper[4942]: I0218 19:26:50.370313 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-p9pz8"]
Feb 18 19:26:50 crc kubenswrapper[4942]: I0218 19:26:50.375783 4942 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 18 19:26:50 crc kubenswrapper[4942]: I0218 19:26:50.516353 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-kdpq5"]
Feb 18 19:26:50 crc kubenswrapper[4942]: W0218 19:26:50.611588 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e365537_e12c_486a_a7e3_156ecf269ba3.slice/crio-da27393b001c232a53b492697d913e4c6fc8fb889efc82f647369ae603cb73b7 WatchSource:0}: Error finding container da27393b001c232a53b492697d913e4c6fc8fb889efc82f647369ae603cb73b7: Status 404 returned error can't find the container with id da27393b001c232a53b492697d913e4c6fc8fb889efc82f647369ae603cb73b7
Feb 18 19:26:50 crc kubenswrapper[4942]: I0218 19:26:50.612954 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-4fcbs"]
Feb 18 19:26:51 crc kubenswrapper[4942]: I0218 19:26:51.377323 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kdpq5" event={"ID":"2d101833-8f66-4f88-931b-62659bb0b37e","Type":"ContainerStarted","Data":"361703ec57837edec24561eb5f613e5781c6396efa55f3de81407a593207fdd5"}
Feb 18 19:26:51 crc kubenswrapper[4942]: I0218 19:26:51.380960 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-4fcbs" event={"ID":"6e365537-e12c-486a-a7e3-156ecf269ba3","Type":"ContainerStarted","Data":"da27393b001c232a53b492697d913e4c6fc8fb889efc82f647369ae603cb73b7"}
Feb 18 19:26:51 crc kubenswrapper[4942]: I0218 19:26:51.382956 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-p9pz8" event={"ID":"b67fb0f6-ae10-459f-82eb-516f6837a3c9","Type":"ContainerStarted","Data":"4b886bc8f43ec6f7a4c065210f1025ff4e8cde472330f17e5ed865925c41d46d"}
Feb 18 19:26:53 crc kubenswrapper[4942]: I0218 19:26:53.740575 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 19:26:53 crc kubenswrapper[4942]: I0218 19:26:53.741052 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 19:26:54 crc kubenswrapper[4942]: I0218 19:26:54.403999 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kdpq5" event={"ID":"2d101833-8f66-4f88-931b-62659bb0b37e","Type":"ContainerStarted","Data":"3448060ea07c059a677d7cb4dbc687f3bc455914a3038ab2c48fbf5211e5064c"}
Feb 18 19:26:54 crc kubenswrapper[4942]: I0218 19:26:54.405852 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-p9pz8" event={"ID":"b67fb0f6-ae10-459f-82eb-516f6837a3c9","Type":"ContainerStarted","Data":"0c3cece2a44cb1606fcea7c8e4f9e8d2a4d8463fceb231d17afeb3f14aaf8bb5"}
Feb 18 19:26:54 crc kubenswrapper[4942]: I0218 19:26:54.434434 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kdpq5" podStartSLOduration=2.718978829 podStartE2EDuration="5.434405704s" podCreationTimestamp="2026-02-18 19:26:49 +0000 UTC" firstStartedPulling="2026-02-18 19:26:50.523625632 +0000 UTC m=+570.228558307" lastFinishedPulling="2026-02-18 19:26:53.239052497 +0000 UTC m=+572.943985182" observedRunningTime="2026-02-18 19:26:54.430268401 +0000 UTC m=+574.135201086" watchObservedRunningTime="2026-02-18 19:26:54.434405704 +0000 UTC m=+574.139338399"
Feb 18 19:26:54 crc kubenswrapper[4942]: I0218 19:26:54.468034 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-p9pz8" podStartSLOduration=2.571630873 podStartE2EDuration="5.46801154s" podCreationTimestamp="2026-02-18 19:26:49 +0000 UTC" firstStartedPulling="2026-02-18 19:26:50.374042505 +0000 UTC m=+570.078975180" lastFinishedPulling="2026-02-18 19:26:53.270423182 +0000 UTC m=+572.975355847" observedRunningTime="2026-02-18 19:26:54.466652283 +0000 UTC m=+574.171584988" watchObservedRunningTime="2026-02-18 19:26:54.46801154 +0000 UTC m=+574.172944205"
Feb 18 19:26:55 crc kubenswrapper[4942]: I0218 19:26:55.415878 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-4fcbs" event={"ID":"6e365537-e12c-486a-a7e3-156ecf269ba3","Type":"ContainerStarted","Data":"4ffea2f6aed24bc16fc6f5716c235c2e8ed0c9e6d6c462e3a765205da3e13cb1"}
Feb 18 19:26:55 crc kubenswrapper[4942]: I0218 19:26:55.416889 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-4fcbs"
Feb 18 19:26:55 crc kubenswrapper[4942]: I0218 19:26:55.442254 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-4fcbs" podStartSLOduration=2.735627103 podStartE2EDuration="6.442222158s" podCreationTimestamp="2026-02-18 19:26:49 +0000 UTC" firstStartedPulling="2026-02-18 19:26:50.613804541 +0000 UTC m=+570.318737226" lastFinishedPulling="2026-02-18 19:26:54.320399586 +0000 UTC m=+574.025332281" observedRunningTime="2026-02-18 19:26:55.440965214 +0000 UTC m=+575.145897909" watchObservedRunningTime="2026-02-18 19:26:55.442222158 +0000 UTC m=+575.147154863"
Feb 18 19:26:59 crc kubenswrapper[4942]: I0218 19:26:59.889409 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-89fzv"]
Feb 18 19:26:59 crc kubenswrapper[4942]: I0218 19:26:59.890199 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovn-controller" containerID="cri-o://427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6" gracePeriod=30
Feb 18 19:26:59 crc kubenswrapper[4942]: I0218 19:26:59.890334 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="kube-rbac-proxy-node" containerID="cri-o://e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7" gracePeriod=30
Feb 18 19:26:59 crc kubenswrapper[4942]: I0218 19:26:59.890308 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="nbdb" containerID="cri-o://bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9" gracePeriod=30
Feb 18 19:26:59 crc kubenswrapper[4942]: I0218 19:26:59.890387 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovn-acl-logging" containerID="cri-o://6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7" gracePeriod=30
Feb 18 19:26:59 crc kubenswrapper[4942]: I0218 19:26:59.890343 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c" gracePeriod=30
Feb 18 19:26:59 crc kubenswrapper[4942]: I0218 19:26:59.890571 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="northd" containerID="cri-o://b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94" gracePeriod=30
Feb 18 19:26:59 crc kubenswrapper[4942]: I0218 19:26:59.890727 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="sbdb" containerID="cri-o://c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23" gracePeriod=30
Feb 18 19:26:59 crc kubenswrapper[4942]: I0218 19:26:59.920122 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller" containerID="cri-o://7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6" gracePeriod=30
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.145471 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-4fcbs"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.180507 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/3.log"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.182780 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovn-acl-logging/0.log"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.183239 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovn-controller/0.log"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.183867 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.251819 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gjbdb"]
Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.252110 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252130 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller"
Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.252146 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="kubecfg-setup"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252157 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="kubecfg-setup"
Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.252166 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="kube-rbac-proxy-node"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252174 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="kube-rbac-proxy-node"
Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.252189 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovn-acl-logging"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252196 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovn-acl-logging"
Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.252209 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovn-controller"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252217 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovn-controller"
Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.252229 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="nbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252237 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="nbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.252247 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="kube-rbac-proxy-ovn-metrics"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252255 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="kube-rbac-proxy-ovn-metrics"
Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.252263 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="sbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252270 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="sbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.252283 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="northd"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252290 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="northd"
Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.252300 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252307 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller"
Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.252317 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252324 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller"
Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.252335 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252343 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252447 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252458 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="nbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252467 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252478 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="kube-rbac-proxy-ovn-metrics"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252491 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovn-controller"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252502 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovn-acl-logging"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252516 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="sbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252524 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252532 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="northd"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252543 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252552 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="kube-rbac-proxy-node"
Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.252661 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252670 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.252789 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerName="ovnkube-controller"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.254638 4942 util.go:30] "No sandbox for pod can
be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300093 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-etc-openvswitch\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300145 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-cni-netd\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300172 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-cni-bin\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300201 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-env-overrides\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300221 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-log-socket\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300220 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300248 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-var-lib-openvswitch\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300282 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-ovnkube-script-lib\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300235 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300287 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-log-socket" (OuterVolumeSpecName: "log-socket") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300330 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-slash\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300366 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl7tj\" (UniqueName: \"kubernetes.io/projected/45dc4164-81a9-44cf-b86a-dff571bc0417-kube-api-access-cl7tj\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300396 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-ovn\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300424 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-var-lib-cni-networks-ovn-kubernetes\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300452 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-systemd-units\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300365 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300483 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-kubelet\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300506 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-ovnkube-config\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300529 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-systemd\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300566 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/45dc4164-81a9-44cf-b86a-dff571bc0417-ovn-node-metrics-cert\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300589 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-node-log\") pod 
\"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300617 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-run-netns\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300637 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-run-ovn-kubernetes\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300664 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-openvswitch\") pod \"45dc4164-81a9-44cf-b86a-dff571bc0417\" (UID: \"45dc4164-81a9-44cf-b86a-dff571bc0417\") " Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300377 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300898 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-node-log" (OuterVolumeSpecName: "node-log") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300425 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-slash" (OuterVolumeSpecName: "host-slash") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300452 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300529 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300547 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300656 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300868 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300878 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300884 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.300957 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.301012 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.301180 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.301214 4942 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-node-log\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.301238 4942 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.301251 4942 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.301263 4942 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.301274 4942 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.301284 4942 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-log-socket\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.301295 4942 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.301309 4942 
reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-slash\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.301322 4942 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.301333 4942 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.301345 4942 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.307353 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45dc4164-81a9-44cf-b86a-dff571bc0417-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.308174 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45dc4164-81a9-44cf-b86a-dff571bc0417-kube-api-access-cl7tj" (OuterVolumeSpecName: "kube-api-access-cl7tj") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "kube-api-access-cl7tj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.317282 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "45dc4164-81a9-44cf-b86a-dff571bc0417" (UID: "45dc4164-81a9-44cf-b86a-dff571bc0417"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.402544 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-systemd-units\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.402617 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9574e413-faa5-4a62-a9ef-aaee68989944-env-overrides\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.402654 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-node-log\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.402703 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-cni-bin\") pod \"ovnkube-node-gjbdb\" (UID: 
\"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.402896 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-run-openvswitch\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.402964 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-run-ovn\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.402999 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-run-systemd\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.403097 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-var-lib-openvswitch\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.403232 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.403281 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-etc-openvswitch\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.403323 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9574e413-faa5-4a62-a9ef-aaee68989944-ovn-node-metrics-cert\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.403436 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9574e413-faa5-4a62-a9ef-aaee68989944-ovnkube-config\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.403482 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-run-ovn-kubernetes\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.403572 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-kubelet\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.403609 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-slash\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.403645 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-run-netns\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.403689 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxwn7\" (UniqueName: \"kubernetes.io/projected/9574e413-faa5-4a62-a9ef-aaee68989944-kube-api-access-sxwn7\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.403747 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-log-socket\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.403878 4942 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9574e413-faa5-4a62-a9ef-aaee68989944-ovnkube-script-lib\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.403919 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-cni-netd\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.404052 4942 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.404086 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl7tj\" (UniqueName: \"kubernetes.io/projected/45dc4164-81a9-44cf-b86a-dff571bc0417-kube-api-access-cl7tj\") on node \"crc\" DevicePath \"\""
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.404107 4942 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-kubelet\") on node \"crc\" DevicePath \"\""
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.404124 4942 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/45dc4164-81a9-44cf-b86a-dff571bc0417-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.404143 4942 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-systemd\") on node \"crc\" DevicePath \"\""
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.404161 4942 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/45dc4164-81a9-44cf-b86a-dff571bc0417-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.404179 4942 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-run-netns\") on node \"crc\" DevicePath \"\""
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.404198 4942 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.404216 4942 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/45dc4164-81a9-44cf-b86a-dff571bc0417-run-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.454420 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovnkube-controller/3.log"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.457953 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovn-acl-logging/0.log"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.458875 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-89fzv_45dc4164-81a9-44cf-b86a-dff571bc0417/ovn-controller/0.log"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.459666 4942 generic.go:334] "Generic (PLEG): container finished" podID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerID="7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6" exitCode=0
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.459720 4942 generic.go:334] "Generic (PLEG): container finished" podID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerID="c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23" exitCode=0
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.459740 4942 generic.go:334] "Generic (PLEG): container finished" podID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerID="bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9" exitCode=0
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.459755 4942 generic.go:334] "Generic (PLEG): container finished" podID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerID="b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94" exitCode=0
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.459790 4942 generic.go:334] "Generic (PLEG): container finished" podID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerID="9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c" exitCode=0
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.459807 4942 generic.go:334] "Generic (PLEG): container finished" podID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerID="e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7" exitCode=0
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.459824 4942 generic.go:334] "Generic (PLEG): container finished" podID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerID="6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7" exitCode=143
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.459840 4942 generic.go:334] "Generic (PLEG): container finished" podID="45dc4164-81a9-44cf-b86a-dff571bc0417" containerID="427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6" exitCode=143
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.459807 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.459828 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460003 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460036 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460061 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460083 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460104 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460123 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460142 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460041 4942 scope.go:117] "RemoveContainer" containerID="7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460156 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460257 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460269 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460281 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460331 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460345 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460359 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460377 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460438 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460454 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460466 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460517 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460531 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460544 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460555 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460606 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460625 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460640 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460709 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460741 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460756 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460822 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460835 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460847 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460858 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460909 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460922 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460934 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.460945 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.461002 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-89fzv" event={"ID":"45dc4164-81a9-44cf-b86a-dff571bc0417","Type":"ContainerDied","Data":"9d4b5c04c361e209886b1bb004385933e7d66c1477df3ba1ff39b92720286780"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.461023 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.461037 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.461049 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.461100 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.461112 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.461123 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.461135 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.461184 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.461201 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.461212 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.464705 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8jfwb_75150b8c-7a02-497b-86c3-eabc9c8dbc55/kube-multus/2.log"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.465411 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8jfwb_75150b8c-7a02-497b-86c3-eabc9c8dbc55/kube-multus/1.log"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.465492 4942 generic.go:334] "Generic (PLEG): container finished" podID="75150b8c-7a02-497b-86c3-eabc9c8dbc55" containerID="62118c834582250ad430997ee392aa040ba0e100f92c0bb922d559c42cf4e958" exitCode=2
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.465545 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8jfwb" event={"ID":"75150b8c-7a02-497b-86c3-eabc9c8dbc55","Type":"ContainerDied","Data":"62118c834582250ad430997ee392aa040ba0e100f92c0bb922d559c42cf4e958"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.465612 4942 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46"}
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.466359 4942 scope.go:117] "RemoveContainer" containerID="62118c834582250ad430997ee392aa040ba0e100f92c0bb922d559c42cf4e958"
Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.466664 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-8jfwb_openshift-multus(75150b8c-7a02-497b-86c3-eabc9c8dbc55)\"" pod="openshift-multus/multus-8jfwb" podUID="75150b8c-7a02-497b-86c3-eabc9c8dbc55"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.500177 4942 scope.go:117] "RemoveContainer" containerID="331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505408 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9574e413-faa5-4a62-a9ef-aaee68989944-ovnkube-script-lib\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505446 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-cni-netd\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505480 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-systemd-units\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505503 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9574e413-faa5-4a62-a9ef-aaee68989944-env-overrides\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505523 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-node-log\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505550 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-cni-bin\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505584 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-run-openvswitch\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505608 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-run-ovn\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505630 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-run-systemd\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505629 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-systemd-units\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505653 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-var-lib-openvswitch\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505666 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-cni-netd\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.506912 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505696 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-cni-bin\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505727 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-node-log\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505745 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-run-openvswitch\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.506954 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-etc-openvswitch\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505798 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-run-ovn\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505723 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-run-systemd\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.506983 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9574e413-faa5-4a62-a9ef-aaee68989944-ovn-node-metrics-cert\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507021 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9574e413-faa5-4a62-a9ef-aaee68989944-ovnkube-config\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507025 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-etc-openvswitch\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507038 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-run-ovn-kubernetes\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507036 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507115 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-run-ovn-kubernetes\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507162 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-kubelet\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507183 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-slash\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507206 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-run-netns\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507231 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxwn7\" (UniqueName: \"kubernetes.io/projected/9574e413-faa5-4a62-a9ef-aaee68989944-kube-api-access-sxwn7\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507255 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-log-socket\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507314 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-log-socket\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507344 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-kubelet\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507371 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-slash\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507449 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-host-run-netns\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.505918 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9574e413-faa5-4a62-a9ef-aaee68989944-var-lib-openvswitch\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507710 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9574e413-faa5-4a62-a9ef-aaee68989944-ovnkube-script-lib\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.507894 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9574e413-faa5-4a62-a9ef-aaee68989944-ovnkube-config\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.509147 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9574e413-faa5-4a62-a9ef-aaee68989944-env-overrides\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.513956 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9574e413-faa5-4a62-a9ef-aaee68989944-ovn-node-metrics-cert\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.528615 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-89fzv"]
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.531450 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxwn7\" (UniqueName: \"kubernetes.io/projected/9574e413-faa5-4a62-a9ef-aaee68989944-kube-api-access-sxwn7\") pod \"ovnkube-node-gjbdb\" (UID: \"9574e413-faa5-4a62-a9ef-aaee68989944\") " pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.533017 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-89fzv"]
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.536032 4942 scope.go:117] "RemoveContainer" containerID="c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.550209 4942 scope.go:117] "RemoveContainer" containerID="bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.569480 4942 scope.go:117] "RemoveContainer" containerID="b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.587842 4942 scope.go:117] "RemoveContainer" containerID="9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.592171 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.619047 4942 scope.go:117] "RemoveContainer" containerID="e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.635787 4942 scope.go:117] "RemoveContainer" containerID="6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.655291 4942 scope.go:117] "RemoveContainer" containerID="427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.675706 4942 scope.go:117] "RemoveContainer" containerID="581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.690256 4942 scope.go:117] "RemoveContainer" containerID="7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6"
Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.691368 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6\": container with ID starting with 7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6 not found: ID does not exist" containerID="7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.691405 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6"} err="failed to get container status \"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6\": rpc error: code = NotFound desc = could not find container \"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6\": container with ID starting with 7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6 not found: ID does not exist"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.691432 4942 scope.go:117] "RemoveContainer" containerID="331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9"
Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.691749 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\": container with ID starting with 331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9 not found: ID does not exist" containerID="331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.691804 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9"} err="failed to get container status \"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\": rpc error: code = NotFound desc = could not find container \"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\": container with ID starting with 331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9 not found: ID does not exist"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.691829 4942 scope.go:117] "RemoveContainer" containerID="c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23"
Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.692274 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\": container with ID starting with c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23 not found: ID does not exist" containerID="c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.692313 4942
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23"} err="failed to get container status \"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\": rpc error: code = NotFound desc = could not find container \"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\": container with ID starting with c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.692347 4942 scope.go:117] "RemoveContainer" containerID="bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.692884 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\": container with ID starting with bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9 not found: ID does not exist" containerID="bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.692905 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9"} err="failed to get container status \"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\": rpc error: code = NotFound desc = could not find container \"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\": container with ID starting with bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.692919 4942 scope.go:117] "RemoveContainer" containerID="b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 
19:27:00.693362 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\": container with ID starting with b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94 not found: ID does not exist" containerID="b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.693416 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94"} err="failed to get container status \"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\": rpc error: code = NotFound desc = could not find container \"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\": container with ID starting with b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.693448 4942 scope.go:117] "RemoveContainer" containerID="9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.693928 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\": container with ID starting with 9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c not found: ID does not exist" containerID="9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.693971 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c"} err="failed to get container status \"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\": rpc 
error: code = NotFound desc = could not find container \"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\": container with ID starting with 9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.693993 4942 scope.go:117] "RemoveContainer" containerID="e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.694343 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\": container with ID starting with e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7 not found: ID does not exist" containerID="e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.694376 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7"} err="failed to get container status \"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\": rpc error: code = NotFound desc = could not find container \"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\": container with ID starting with e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.694396 4942 scope.go:117] "RemoveContainer" containerID="6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.694870 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\": container with ID starting with 
6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7 not found: ID does not exist" containerID="6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.694901 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7"} err="failed to get container status \"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\": rpc error: code = NotFound desc = could not find container \"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\": container with ID starting with 6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.694947 4942 scope.go:117] "RemoveContainer" containerID="427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.695258 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\": container with ID starting with 427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6 not found: ID does not exist" containerID="427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.695295 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6"} err="failed to get container status \"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\": rpc error: code = NotFound desc = could not find container \"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\": container with ID starting with 427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6 not found: ID does not 
exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.695319 4942 scope.go:117] "RemoveContainer" containerID="581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc" Feb 18 19:27:00 crc kubenswrapper[4942]: E0218 19:27:00.695684 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\": container with ID starting with 581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc not found: ID does not exist" containerID="581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.695713 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc"} err="failed to get container status \"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\": rpc error: code = NotFound desc = could not find container \"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\": container with ID starting with 581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.695730 4942 scope.go:117] "RemoveContainer" containerID="7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.696100 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6"} err="failed to get container status \"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6\": rpc error: code = NotFound desc = could not find container \"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6\": container with ID starting with 7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6 not found: ID 
does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.696157 4942 scope.go:117] "RemoveContainer" containerID="331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.696500 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9"} err="failed to get container status \"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\": rpc error: code = NotFound desc = could not find container \"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\": container with ID starting with 331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.696529 4942 scope.go:117] "RemoveContainer" containerID="c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.696870 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23"} err="failed to get container status \"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\": rpc error: code = NotFound desc = could not find container \"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\": container with ID starting with c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.696902 4942 scope.go:117] "RemoveContainer" containerID="bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.697240 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9"} err="failed to get container 
status \"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\": rpc error: code = NotFound desc = could not find container \"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\": container with ID starting with bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.697287 4942 scope.go:117] "RemoveContainer" containerID="b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.697661 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94"} err="failed to get container status \"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\": rpc error: code = NotFound desc = could not find container \"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\": container with ID starting with b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.697685 4942 scope.go:117] "RemoveContainer" containerID="9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.698103 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c"} err="failed to get container status \"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\": rpc error: code = NotFound desc = could not find container \"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\": container with ID starting with 9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.698131 4942 scope.go:117] "RemoveContainer" 
containerID="e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.698380 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7"} err="failed to get container status \"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\": rpc error: code = NotFound desc = could not find container \"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\": container with ID starting with e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.698407 4942 scope.go:117] "RemoveContainer" containerID="6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.698885 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7"} err="failed to get container status \"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\": rpc error: code = NotFound desc = could not find container \"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\": container with ID starting with 6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.698918 4942 scope.go:117] "RemoveContainer" containerID="427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.699203 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6"} err="failed to get container status \"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\": rpc error: code = NotFound desc = could 
not find container \"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\": container with ID starting with 427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.699223 4942 scope.go:117] "RemoveContainer" containerID="581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.699447 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc"} err="failed to get container status \"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\": rpc error: code = NotFound desc = could not find container \"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\": container with ID starting with 581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.699471 4942 scope.go:117] "RemoveContainer" containerID="7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.699888 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6"} err="failed to get container status \"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6\": rpc error: code = NotFound desc = could not find container \"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6\": container with ID starting with 7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.699915 4942 scope.go:117] "RemoveContainer" containerID="331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 
19:27:00.700208 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9"} err="failed to get container status \"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\": rpc error: code = NotFound desc = could not find container \"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\": container with ID starting with 331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.700231 4942 scope.go:117] "RemoveContainer" containerID="c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.700515 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23"} err="failed to get container status \"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\": rpc error: code = NotFound desc = could not find container \"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\": container with ID starting with c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.700543 4942 scope.go:117] "RemoveContainer" containerID="bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.700872 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9"} err="failed to get container status \"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\": rpc error: code = NotFound desc = could not find container \"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\": container with ID starting with 
bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.700900 4942 scope.go:117] "RemoveContainer" containerID="b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.701152 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94"} err="failed to get container status \"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\": rpc error: code = NotFound desc = could not find container \"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\": container with ID starting with b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.701178 4942 scope.go:117] "RemoveContainer" containerID="9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.701563 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c"} err="failed to get container status \"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\": rpc error: code = NotFound desc = could not find container \"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\": container with ID starting with 9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.701584 4942 scope.go:117] "RemoveContainer" containerID="e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.701873 4942 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7"} err="failed to get container status \"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\": rpc error: code = NotFound desc = could not find container \"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\": container with ID starting with e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.701900 4942 scope.go:117] "RemoveContainer" containerID="6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.702177 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7"} err="failed to get container status \"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\": rpc error: code = NotFound desc = could not find container \"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\": container with ID starting with 6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.702239 4942 scope.go:117] "RemoveContainer" containerID="427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.702517 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6"} err="failed to get container status \"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\": rpc error: code = NotFound desc = could not find container \"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\": container with ID starting with 427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6 not found: ID does not 
exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.702538 4942 scope.go:117] "RemoveContainer" containerID="581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.702820 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc"} err="failed to get container status \"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\": rpc error: code = NotFound desc = could not find container \"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\": container with ID starting with 581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.702847 4942 scope.go:117] "RemoveContainer" containerID="7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.703138 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6"} err="failed to get container status \"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6\": rpc error: code = NotFound desc = could not find container \"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6\": container with ID starting with 7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.703161 4942 scope.go:117] "RemoveContainer" containerID="331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.703409 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9"} err="failed to get container status 
\"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\": rpc error: code = NotFound desc = could not find container \"331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9\": container with ID starting with 331d92ab2b896c654b5eb6e9e3372f06c02c3b582188b54cff7b9b6feb78c9a9 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.703431 4942 scope.go:117] "RemoveContainer" containerID="c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.703659 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23"} err="failed to get container status \"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\": rpc error: code = NotFound desc = could not find container \"c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23\": container with ID starting with c498aa99d3ec10af57c279f23804f4dce52a99d2c73fafe2bd9dc6ea454c7a23 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.703686 4942 scope.go:117] "RemoveContainer" containerID="bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.703999 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9"} err="failed to get container status \"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\": rpc error: code = NotFound desc = could not find container \"bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9\": container with ID starting with bcc9ee5f12cc3a3518c9fe13c16743e946e59b82dc01239767afb1e4afb2e4b9 not found: ID does not exist" Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.704022 4942 scope.go:117] "RemoveContainer" 
containerID="b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.704256 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94"} err="failed to get container status \"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\": rpc error: code = NotFound desc = could not find container \"b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94\": container with ID starting with b2e222b580b244e85a382499ae61c72779f95fdab87e4d4c723d29b488219f94 not found: ID does not exist"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.704298 4942 scope.go:117] "RemoveContainer" containerID="9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.704720 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c"} err="failed to get container status \"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\": rpc error: code = NotFound desc = could not find container \"9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c\": container with ID starting with 9333dac09e056ca12a248589ed4a097788b86ab83f9a1014d76d8bad88f1800c not found: ID does not exist"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.704743 4942 scope.go:117] "RemoveContainer" containerID="e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.705608 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7"} err="failed to get container status \"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\": rpc error: code = NotFound desc = could not find container \"e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7\": container with ID starting with e988175a524e389ddf3e3a47acb65910ac3bf3b812e14b76d988f13e2cdc5dc7 not found: ID does not exist"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.705644 4942 scope.go:117] "RemoveContainer" containerID="6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.706010 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7"} err="failed to get container status \"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\": rpc error: code = NotFound desc = could not find container \"6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7\": container with ID starting with 6351d0088a3e9c170ebe043fa700ef7f870c52f40d751b4fd13ac7b5bfa5e3b7 not found: ID does not exist"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.706035 4942 scope.go:117] "RemoveContainer" containerID="427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.706338 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6"} err="failed to get container status \"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\": rpc error: code = NotFound desc = could not find container \"427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6\": container with ID starting with 427d7c083c5040fc6afe217c7850f1114323977542e83eb35d0a71b4bef6ecc6 not found: ID does not exist"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.706374 4942 scope.go:117] "RemoveContainer" containerID="581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.706811 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc"} err="failed to get container status \"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\": rpc error: code = NotFound desc = could not find container \"581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc\": container with ID starting with 581689b9e064557a35e24e6d5c15a73036b8499700959fd330e9ebee15543edc not found: ID does not exist"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.706842 4942 scope.go:117] "RemoveContainer" containerID="7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6"
Feb 18 19:27:00 crc kubenswrapper[4942]: I0218 19:27:00.707094 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6"} err="failed to get container status \"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6\": rpc error: code = NotFound desc = could not find container \"7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6\": container with ID starting with 7f5cfffb19bf5e734126be098127f35dd8141f0fb212e21f57fd5fb0d64306d6 not found: ID does not exist"
Feb 18 19:27:01 crc kubenswrapper[4942]: I0218 19:27:01.048500 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45dc4164-81a9-44cf-b86a-dff571bc0417" path="/var/lib/kubelet/pods/45dc4164-81a9-44cf-b86a-dff571bc0417/volumes"
Feb 18 19:27:01 crc kubenswrapper[4942]: I0218 19:27:01.473424 4942 generic.go:334] "Generic (PLEG): container finished" podID="9574e413-faa5-4a62-a9ef-aaee68989944" containerID="b3806cadf6db010b7ff938701ef6c223075e700d63136fe60f4aa5b6ab710c25" exitCode=0
Feb 18 19:27:01 crc kubenswrapper[4942]: I0218 19:27:01.473463 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" event={"ID":"9574e413-faa5-4a62-a9ef-aaee68989944","Type":"ContainerDied","Data":"b3806cadf6db010b7ff938701ef6c223075e700d63136fe60f4aa5b6ab710c25"}
Feb 18 19:27:01 crc kubenswrapper[4942]: I0218 19:27:01.473514 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" event={"ID":"9574e413-faa5-4a62-a9ef-aaee68989944","Type":"ContainerStarted","Data":"4e65f2f25c26de3fdb063c8a7c04ce58c5c1e39df7b646bca82561106b59cff4"}
Feb 18 19:27:02 crc kubenswrapper[4942]: I0218 19:27:02.482250 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" event={"ID":"9574e413-faa5-4a62-a9ef-aaee68989944","Type":"ContainerStarted","Data":"8702d73c36e3d25dc2ecc4611e8459e92dbc20e65cf96f81005b5d0fbd0fa3b2"}
Feb 18 19:27:02 crc kubenswrapper[4942]: I0218 19:27:02.482539 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" event={"ID":"9574e413-faa5-4a62-a9ef-aaee68989944","Type":"ContainerStarted","Data":"1fe567c7f5871b8847d931208b1f6d6d85a4716ecde1f739b66cc2fea61ff2a0"}
Feb 18 19:27:02 crc kubenswrapper[4942]: I0218 19:27:02.482559 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" event={"ID":"9574e413-faa5-4a62-a9ef-aaee68989944","Type":"ContainerStarted","Data":"eeac1ee4643777c1aa501950bed3477bb3d55b1f9e7699e7c8398406c4034434"}
Feb 18 19:27:02 crc kubenswrapper[4942]: I0218 19:27:02.482576 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" event={"ID":"9574e413-faa5-4a62-a9ef-aaee68989944","Type":"ContainerStarted","Data":"d60548efb514441807d9eca0f97e09724ec058e39e5591b232d5fccfedf16463"}
Feb 18 19:27:02 crc kubenswrapper[4942]: I0218 19:27:02.482594 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" event={"ID":"9574e413-faa5-4a62-a9ef-aaee68989944","Type":"ContainerStarted","Data":"b5842ac58811154562b4d429af15ff6d1931e52c1eb3efbd6b7bade3e787badd"}
Feb 18 19:27:02 crc kubenswrapper[4942]: I0218 19:27:02.482611 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" event={"ID":"9574e413-faa5-4a62-a9ef-aaee68989944","Type":"ContainerStarted","Data":"6e93c6773d7467ab5007080e262b72bbf4b3d35c0af27f60e0c9a8b9e5aff647"}
Feb 18 19:27:05 crc kubenswrapper[4942]: I0218 19:27:05.579368 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" event={"ID":"9574e413-faa5-4a62-a9ef-aaee68989944","Type":"ContainerStarted","Data":"72e5fef7206f23ebac783522ff692ffa396621e19f73e573f5a691473bc941ec"}
Feb 18 19:27:07 crc kubenswrapper[4942]: I0218 19:27:07.595279 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" event={"ID":"9574e413-faa5-4a62-a9ef-aaee68989944","Type":"ContainerStarted","Data":"3e16529e2f53a255fd6b48abe3f547c6b939e7ab702b745ed2264244bd5959e5"}
Feb 18 19:27:07 crc kubenswrapper[4942]: I0218 19:27:07.595661 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:07 crc kubenswrapper[4942]: I0218 19:27:07.597271 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:07 crc kubenswrapper[4942]: I0218 19:27:07.597305 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:07 crc kubenswrapper[4942]: I0218 19:27:07.623814 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:07 crc kubenswrapper[4942]: I0218 19:27:07.627260 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:07 crc kubenswrapper[4942]: I0218 19:27:07.629311 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb" podStartSLOduration=7.629297102 podStartE2EDuration="7.629297102s" podCreationTimestamp="2026-02-18 19:27:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:27:07.6246983 +0000 UTC m=+587.329630985" watchObservedRunningTime="2026-02-18 19:27:07.629297102 +0000 UTC m=+587.334229787"
Feb 18 19:27:15 crc kubenswrapper[4942]: I0218 19:27:15.036422 4942 scope.go:117] "RemoveContainer" containerID="62118c834582250ad430997ee392aa040ba0e100f92c0bb922d559c42cf4e958"
Feb 18 19:27:15 crc kubenswrapper[4942]: E0218 19:27:15.037345 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-8jfwb_openshift-multus(75150b8c-7a02-497b-86c3-eabc9c8dbc55)\"" pod="openshift-multus/multus-8jfwb" podUID="75150b8c-7a02-497b-86c3-eabc9c8dbc55"
Feb 18 19:27:21 crc kubenswrapper[4942]: I0218 19:27:21.296585 4942 scope.go:117] "RemoveContainer" containerID="4ea9fbe1ac2843b80786e84d58bed874d360e223686eac9666589a7841d71c46"
Feb 18 19:27:21 crc kubenswrapper[4942]: I0218 19:27:21.691734 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8jfwb_75150b8c-7a02-497b-86c3-eabc9c8dbc55/kube-multus/2.log"
Feb 18 19:27:23 crc kubenswrapper[4942]: I0218 19:27:23.741045 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 19:27:23 crc kubenswrapper[4942]: I0218 19:27:23.741147 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.015718 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"]
Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.017995 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"
Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.021729 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.033234 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"]
Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.036607 4942 scope.go:117] "RemoveContainer" containerID="62118c834582250ad430997ee392aa040ba0e100f92c0bb922d559c42cf4e958"
Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.090077 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc\" (UID: \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"
Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.090224 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc\" (UID: \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"
Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.090305 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdmzp\" (UniqueName: \"kubernetes.io/projected/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-kube-api-access-xdmzp\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc\" (UID: \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"
Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.192336 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc\" (UID: \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"
Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.192494 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc\" (UID: \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"
Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.192600 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdmzp\" (UniqueName: \"kubernetes.io/projected/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-kube-api-access-xdmzp\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc\" (UID: \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"
Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.193477 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc\" (UID: \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"
Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.193505 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc\" (UID: \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"
Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.225009 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdmzp\" (UniqueName: \"kubernetes.io/projected/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-kube-api-access-xdmzp\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc\" (UID: \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"
Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.384573 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"
Feb 18 19:27:27 crc kubenswrapper[4942]: E0218 19:27:27.420419 4942 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc_openshift-marketplace_a2cedb85-fdc1-4d04-b9e2-967d0d2791da_0(7058b14b21ef01edd614ab28a7b919b5565fa10bd1f21fad4ac4b49b3e621e6d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 18 19:27:27 crc kubenswrapper[4942]: E0218 19:27:27.420508 4942 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc_openshift-marketplace_a2cedb85-fdc1-4d04-b9e2-967d0d2791da_0(7058b14b21ef01edd614ab28a7b919b5565fa10bd1f21fad4ac4b49b3e621e6d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"
Feb 18 19:27:27 crc kubenswrapper[4942]: E0218 19:27:27.420551 4942 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc_openshift-marketplace_a2cedb85-fdc1-4d04-b9e2-967d0d2791da_0(7058b14b21ef01edd614ab28a7b919b5565fa10bd1f21fad4ac4b49b3e621e6d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"
Feb 18 19:27:27 crc kubenswrapper[4942]: E0218 19:27:27.420627 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc_openshift-marketplace(a2cedb85-fdc1-4d04-b9e2-967d0d2791da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc_openshift-marketplace(a2cedb85-fdc1-4d04-b9e2-967d0d2791da)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc_openshift-marketplace_a2cedb85-fdc1-4d04-b9e2-967d0d2791da_0(7058b14b21ef01edd614ab28a7b919b5565fa10bd1f21fad4ac4b49b3e621e6d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" podUID="a2cedb85-fdc1-4d04-b9e2-967d0d2791da"
Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.734228 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8jfwb_75150b8c-7a02-497b-86c3-eabc9c8dbc55/kube-multus/2.log"
Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.734370 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"
Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.734368 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8jfwb" event={"ID":"75150b8c-7a02-497b-86c3-eabc9c8dbc55","Type":"ContainerStarted","Data":"68a6bd8e884ce1a855d0edd9eff0fbea8148383a78fc6b30daf35f06965eadbc"}
Feb 18 19:27:27 crc kubenswrapper[4942]: I0218 19:27:27.736554 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"
Feb 18 19:27:27 crc kubenswrapper[4942]: E0218 19:27:27.777613 4942 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc_openshift-marketplace_a2cedb85-fdc1-4d04-b9e2-967d0d2791da_0(0e774a7a9d0bf2beee082ec0668f0f5a0f60588e4e59cb2ea604cd14da9a7429): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 18 19:27:27 crc kubenswrapper[4942]: E0218 19:27:27.777724 4942 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc_openshift-marketplace_a2cedb85-fdc1-4d04-b9e2-967d0d2791da_0(0e774a7a9d0bf2beee082ec0668f0f5a0f60588e4e59cb2ea604cd14da9a7429): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"
Feb 18 19:27:27 crc kubenswrapper[4942]: E0218 19:27:27.777804 4942 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc_openshift-marketplace_a2cedb85-fdc1-4d04-b9e2-967d0d2791da_0(0e774a7a9d0bf2beee082ec0668f0f5a0f60588e4e59cb2ea604cd14da9a7429): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"
Feb 18 19:27:27 crc kubenswrapper[4942]: E0218 19:27:27.777927 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc_openshift-marketplace(a2cedb85-fdc1-4d04-b9e2-967d0d2791da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc_openshift-marketplace(a2cedb85-fdc1-4d04-b9e2-967d0d2791da)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc_openshift-marketplace_a2cedb85-fdc1-4d04-b9e2-967d0d2791da_0(0e774a7a9d0bf2beee082ec0668f0f5a0f60588e4e59cb2ea604cd14da9a7429): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" podUID="a2cedb85-fdc1-4d04-b9e2-967d0d2791da"
Feb 18 19:27:30 crc kubenswrapper[4942]: I0218 19:27:30.630826 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gjbdb"
Feb 18 19:27:41 crc kubenswrapper[4942]: I0218 19:27:41.035807 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"
Feb 18 19:27:41 crc kubenswrapper[4942]: I0218 19:27:41.041526 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"
Feb 18 19:27:41 crc kubenswrapper[4942]: I0218 19:27:41.293351 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"]
Feb 18 19:27:41 crc kubenswrapper[4942]: I0218 19:27:41.836550 4942 generic.go:334] "Generic (PLEG): container finished" podID="a2cedb85-fdc1-4d04-b9e2-967d0d2791da" containerID="f9036c547ff46d448e446680de7f71fc6a8d1a01d85f1b6d0cedaf3c3785e510" exitCode=0
Feb 18 19:27:41 crc kubenswrapper[4942]: I0218 19:27:41.836645 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" event={"ID":"a2cedb85-fdc1-4d04-b9e2-967d0d2791da","Type":"ContainerDied","Data":"f9036c547ff46d448e446680de7f71fc6a8d1a01d85f1b6d0cedaf3c3785e510"}
Feb 18 19:27:41 crc kubenswrapper[4942]: I0218 19:27:41.836910 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" event={"ID":"a2cedb85-fdc1-4d04-b9e2-967d0d2791da","Type":"ContainerStarted","Data":"ef9f46e2a0b144ea8b66465734389623694b2295e808caaab96975617bc221ba"}
Feb 18 19:27:43 crc kubenswrapper[4942]: I0218 19:27:43.857204 4942 generic.go:334] "Generic (PLEG): container finished" podID="a2cedb85-fdc1-4d04-b9e2-967d0d2791da" containerID="3bb342f48838670913250684aad6b73f7799ab3f4a96c8f68276fea888ab361d" exitCode=0
Feb 18 19:27:43 crc kubenswrapper[4942]: I0218 19:27:43.857298 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" event={"ID":"a2cedb85-fdc1-4d04-b9e2-967d0d2791da","Type":"ContainerDied","Data":"3bb342f48838670913250684aad6b73f7799ab3f4a96c8f68276fea888ab361d"}
Feb 18 19:27:44 crc kubenswrapper[4942]: I0218 19:27:44.865787 4942 generic.go:334] "Generic (PLEG): container finished" podID="a2cedb85-fdc1-4d04-b9e2-967d0d2791da" containerID="097d3e24b692e61af5414e5fb41749063e9281c93871177c569f43a4f903f6fd" exitCode=0
Feb 18 19:27:44 crc kubenswrapper[4942]: I0218 19:27:44.865804 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" event={"ID":"a2cedb85-fdc1-4d04-b9e2-967d0d2791da","Type":"ContainerDied","Data":"097d3e24b692e61af5414e5fb41749063e9281c93871177c569f43a4f903f6fd"}
Feb 18 19:27:46 crc kubenswrapper[4942]: I0218 19:27:46.190499 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"
Feb 18 19:27:46 crc kubenswrapper[4942]: I0218 19:27:46.346410 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-util\") pod \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\" (UID: \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\") "
Feb 18 19:27:46 crc kubenswrapper[4942]: I0218 19:27:46.347121 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-bundle\") pod \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\" (UID: \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\") "
Feb 18 19:27:46 crc kubenswrapper[4942]: I0218 19:27:46.347171 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdmzp\" (UniqueName: \"kubernetes.io/projected/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-kube-api-access-xdmzp\") pod \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\" (UID: \"a2cedb85-fdc1-4d04-b9e2-967d0d2791da\") "
Feb 18 19:27:46 crc kubenswrapper[4942]: I0218 19:27:46.351116 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-bundle" (OuterVolumeSpecName: "bundle") pod "a2cedb85-fdc1-4d04-b9e2-967d0d2791da" (UID: "a2cedb85-fdc1-4d04-b9e2-967d0d2791da"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:27:46 crc kubenswrapper[4942]: I0218 19:27:46.354128 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-kube-api-access-xdmzp" (OuterVolumeSpecName: "kube-api-access-xdmzp") pod "a2cedb85-fdc1-4d04-b9e2-967d0d2791da" (UID: "a2cedb85-fdc1-4d04-b9e2-967d0d2791da"). InnerVolumeSpecName "kube-api-access-xdmzp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:27:46 crc kubenswrapper[4942]: I0218 19:27:46.365396 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-util" (OuterVolumeSpecName: "util") pod "a2cedb85-fdc1-4d04-b9e2-967d0d2791da" (UID: "a2cedb85-fdc1-4d04-b9e2-967d0d2791da"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:27:46 crc kubenswrapper[4942]: I0218 19:27:46.449015 4942 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-util\") on node \"crc\" DevicePath \"\""
Feb 18 19:27:46 crc kubenswrapper[4942]: I0218 19:27:46.449057 4942 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:27:46 crc kubenswrapper[4942]: I0218 19:27:46.449069 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdmzp\" (UniqueName: \"kubernetes.io/projected/a2cedb85-fdc1-4d04-b9e2-967d0d2791da-kube-api-access-xdmzp\") on node \"crc\" DevicePath \"\""
Feb 18 19:27:46 crc kubenswrapper[4942]: I0218 19:27:46.899401 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc" event={"ID":"a2cedb85-fdc1-4d04-b9e2-967d0d2791da","Type":"ContainerDied","Data":"ef9f46e2a0b144ea8b66465734389623694b2295e808caaab96975617bc221ba"}
Feb 18 19:27:46 crc kubenswrapper[4942]: I0218 19:27:46.899445 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef9f46e2a0b144ea8b66465734389623694b2295e808caaab96975617bc221ba"
Feb 18 19:27:46 crc kubenswrapper[4942]: I0218 19:27:46.899479 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mnblc"
Feb 18 19:27:53 crc kubenswrapper[4942]: I0218 19:27:53.741341 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 19:27:53 crc kubenswrapper[4942]: I0218 19:27:53.741915 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 19:27:53 crc kubenswrapper[4942]: I0218 19:27:53.741964 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4"
Feb 18 19:27:53 crc kubenswrapper[4942]: I0218 19:27:53.742585 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"69563ccc2ca715071d77cf8ee678820b7e15eada4a6e511a3ef021c2758d0101"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 18 19:27:53 crc kubenswrapper[4942]: I0218 19:27:53.742655 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://69563ccc2ca715071d77cf8ee678820b7e15eada4a6e511a3ef021c2758d0101" gracePeriod=600
Feb 18 19:27:53 crc kubenswrapper[4942]: I0218 19:27:53.941366 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="69563ccc2ca715071d77cf8ee678820b7e15eada4a6e511a3ef021c2758d0101" exitCode=0
Feb 18 19:27:53 crc kubenswrapper[4942]: I0218 19:27:53.941442 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"69563ccc2ca715071d77cf8ee678820b7e15eada4a6e511a3ef021c2758d0101"}
Feb 18 19:27:53 crc kubenswrapper[4942]: I0218 19:27:53.941680 4942 scope.go:117] "RemoveContainer" containerID="cbd8c39f4ca27a862760680c197d71be21444460d43b83855f644da4c249ce06"
Feb 18 19:27:54 crc kubenswrapper[4942]: I0218 19:27:54.948631 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"573640abad6b15c1dd30fd80a1b600755a1efda149dab25e49e3a1173acf646a"}
Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.023458 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-nh9c8"]
Feb 18 19:27:55 crc kubenswrapper[4942]: E0218 19:27:55.023696 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2cedb85-fdc1-4d04-b9e2-967d0d2791da" containerName="extract"
Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.023709 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2cedb85-fdc1-4d04-b9e2-967d0d2791da" containerName="extract"
Feb 18 19:27:55 crc kubenswrapper[4942]: E0218 19:27:55.023728 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2cedb85-fdc1-4d04-b9e2-967d0d2791da" containerName="util"
Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.023735 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2cedb85-fdc1-4d04-b9e2-967d0d2791da" containerName="util"
Feb 18 19:27:55 crc kubenswrapper[4942]: E0218 19:27:55.023748 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2cedb85-fdc1-4d04-b9e2-967d0d2791da" containerName="pull"
Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.023756 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2cedb85-fdc1-4d04-b9e2-967d0d2791da" containerName="pull"
Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.023878 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2cedb85-fdc1-4d04-b9e2-967d0d2791da" containerName="extract"
Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.024272 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nh9c8"
Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.025699 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.025964 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-4lq4s"
Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.028870 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.076259 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-nh9c8"]
Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.080663 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk"]
Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.082977 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk"
Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.085744 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-flnfb"
Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.086018 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.090959 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw"]
Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.093957 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw"
Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.097218 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk"]
Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.124613 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw"]
Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.153429 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96x2f\" (UniqueName: \"kubernetes.io/projected/c22c1602-eed9-45f3-93cf-80a86cad1bab-kube-api-access-96x2f\") pod \"obo-prometheus-operator-68bc856cb9-nh9c8\" (UID: \"c22c1602-eed9-45f3-93cf-80a86cad1bab\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nh9c8"
Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.242811 4942 kubelet.go:2421] "SyncLoop ADD" source="api"
pods=["openshift-operators/observability-operator-59bdc8b94-c4t79"] Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.243966 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-c4t79" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.246468 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-7nxw2" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.246720 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.254922 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4cf13df3-44a2-4895-ac06-37d5eba7767d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5489f95489-5wrzk\" (UID: \"4cf13df3-44a2-4895-ac06-37d5eba7767d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.254975 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61120cf0-34ff-4dbe-9a7a-c94fe6960d34-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5489f95489-qvrqw\" (UID: \"61120cf0-34ff-4dbe-9a7a-c94fe6960d34\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.255037 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61120cf0-34ff-4dbe-9a7a-c94fe6960d34-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5489f95489-qvrqw\" (UID: \"61120cf0-34ff-4dbe-9a7a-c94fe6960d34\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.255072 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96x2f\" (UniqueName: \"kubernetes.io/projected/c22c1602-eed9-45f3-93cf-80a86cad1bab-kube-api-access-96x2f\") pod \"obo-prometheus-operator-68bc856cb9-nh9c8\" (UID: \"c22c1602-eed9-45f3-93cf-80a86cad1bab\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nh9c8" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.255164 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cf13df3-44a2-4895-ac06-37d5eba7767d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5489f95489-5wrzk\" (UID: \"4cf13df3-44a2-4895-ac06-37d5eba7767d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.258399 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-c4t79"] Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.278055 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96x2f\" (UniqueName: \"kubernetes.io/projected/c22c1602-eed9-45f3-93cf-80a86cad1bab-kube-api-access-96x2f\") pod \"obo-prometheus-operator-68bc856cb9-nh9c8\" (UID: \"c22c1602-eed9-45f3-93cf-80a86cad1bab\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nh9c8" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.338670 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nh9c8" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.357005 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d59t2\" (UniqueName: \"kubernetes.io/projected/c013e97b-628a-48b9-9758-3b8c388b8be9-kube-api-access-d59t2\") pod \"observability-operator-59bdc8b94-c4t79\" (UID: \"c013e97b-628a-48b9-9758-3b8c388b8be9\") " pod="openshift-operators/observability-operator-59bdc8b94-c4t79" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.357073 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c013e97b-628a-48b9-9758-3b8c388b8be9-observability-operator-tls\") pod \"observability-operator-59bdc8b94-c4t79\" (UID: \"c013e97b-628a-48b9-9758-3b8c388b8be9\") " pod="openshift-operators/observability-operator-59bdc8b94-c4t79" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.357120 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cf13df3-44a2-4895-ac06-37d5eba7767d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5489f95489-5wrzk\" (UID: \"4cf13df3-44a2-4895-ac06-37d5eba7767d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.357150 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4cf13df3-44a2-4895-ac06-37d5eba7767d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5489f95489-5wrzk\" (UID: \"4cf13df3-44a2-4895-ac06-37d5eba7767d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.357204 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61120cf0-34ff-4dbe-9a7a-c94fe6960d34-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5489f95489-qvrqw\" (UID: \"61120cf0-34ff-4dbe-9a7a-c94fe6960d34\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.357239 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61120cf0-34ff-4dbe-9a7a-c94fe6960d34-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5489f95489-qvrqw\" (UID: \"61120cf0-34ff-4dbe-9a7a-c94fe6960d34\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.361743 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/61120cf0-34ff-4dbe-9a7a-c94fe6960d34-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5489f95489-qvrqw\" (UID: \"61120cf0-34ff-4dbe-9a7a-c94fe6960d34\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.361895 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/61120cf0-34ff-4dbe-9a7a-c94fe6960d34-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5489f95489-qvrqw\" (UID: \"61120cf0-34ff-4dbe-9a7a-c94fe6960d34\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.363139 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4cf13df3-44a2-4895-ac06-37d5eba7767d-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-5489f95489-5wrzk\" (UID: \"4cf13df3-44a2-4895-ac06-37d5eba7767d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.367274 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4cf13df3-44a2-4895-ac06-37d5eba7767d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5489f95489-5wrzk\" (UID: \"4cf13df3-44a2-4895-ac06-37d5eba7767d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.404433 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.414484 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.462433 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d59t2\" (UniqueName: \"kubernetes.io/projected/c013e97b-628a-48b9-9758-3b8c388b8be9-kube-api-access-d59t2\") pod \"observability-operator-59bdc8b94-c4t79\" (UID: \"c013e97b-628a-48b9-9758-3b8c388b8be9\") " pod="openshift-operators/observability-operator-59bdc8b94-c4t79" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.462778 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c013e97b-628a-48b9-9758-3b8c388b8be9-observability-operator-tls\") pod \"observability-operator-59bdc8b94-c4t79\" (UID: \"c013e97b-628a-48b9-9758-3b8c388b8be9\") " pod="openshift-operators/observability-operator-59bdc8b94-c4t79" Feb 18 19:27:55 crc 
kubenswrapper[4942]: I0218 19:27:55.467931 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c013e97b-628a-48b9-9758-3b8c388b8be9-observability-operator-tls\") pod \"observability-operator-59bdc8b94-c4t79\" (UID: \"c013e97b-628a-48b9-9758-3b8c388b8be9\") " pod="openshift-operators/observability-operator-59bdc8b94-c4t79" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.492675 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-qs7ps"] Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.493512 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-qs7ps" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.518734 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d59t2\" (UniqueName: \"kubernetes.io/projected/c013e97b-628a-48b9-9758-3b8c388b8be9-kube-api-access-d59t2\") pod \"observability-operator-59bdc8b94-c4t79\" (UID: \"c013e97b-628a-48b9-9758-3b8c388b8be9\") " pod="openshift-operators/observability-operator-59bdc8b94-c4t79" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.528079 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-qxhvn" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.542058 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-qs7ps"] Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.560175 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-c4t79" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.566319 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94w99\" (UniqueName: \"kubernetes.io/projected/8fc17a06-be09-403e-8923-df71fac9cdfe-kube-api-access-94w99\") pod \"perses-operator-5bf474d74f-qs7ps\" (UID: \"8fc17a06-be09-403e-8923-df71fac9cdfe\") " pod="openshift-operators/perses-operator-5bf474d74f-qs7ps" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.566362 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8fc17a06-be09-403e-8923-df71fac9cdfe-openshift-service-ca\") pod \"perses-operator-5bf474d74f-qs7ps\" (UID: \"8fc17a06-be09-403e-8923-df71fac9cdfe\") " pod="openshift-operators/perses-operator-5bf474d74f-qs7ps" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.646616 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-nh9c8"] Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.668145 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94w99\" (UniqueName: \"kubernetes.io/projected/8fc17a06-be09-403e-8923-df71fac9cdfe-kube-api-access-94w99\") pod \"perses-operator-5bf474d74f-qs7ps\" (UID: \"8fc17a06-be09-403e-8923-df71fac9cdfe\") " pod="openshift-operators/perses-operator-5bf474d74f-qs7ps" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.668204 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8fc17a06-be09-403e-8923-df71fac9cdfe-openshift-service-ca\") pod \"perses-operator-5bf474d74f-qs7ps\" (UID: \"8fc17a06-be09-403e-8923-df71fac9cdfe\") " pod="openshift-operators/perses-operator-5bf474d74f-qs7ps" Feb 
18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.670144 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/8fc17a06-be09-403e-8923-df71fac9cdfe-openshift-service-ca\") pod \"perses-operator-5bf474d74f-qs7ps\" (UID: \"8fc17a06-be09-403e-8923-df71fac9cdfe\") " pod="openshift-operators/perses-operator-5bf474d74f-qs7ps" Feb 18 19:27:55 crc kubenswrapper[4942]: W0218 19:27:55.674948 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc22c1602_eed9_45f3_93cf_80a86cad1bab.slice/crio-cd82067559c38da7d771016dbc8d003ed6c20dd918257b961141dced42d3f006 WatchSource:0}: Error finding container cd82067559c38da7d771016dbc8d003ed6c20dd918257b961141dced42d3f006: Status 404 returned error can't find the container with id cd82067559c38da7d771016dbc8d003ed6c20dd918257b961141dced42d3f006 Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.697211 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94w99\" (UniqueName: \"kubernetes.io/projected/8fc17a06-be09-403e-8923-df71fac9cdfe-kube-api-access-94w99\") pod \"perses-operator-5bf474d74f-qs7ps\" (UID: \"8fc17a06-be09-403e-8923-df71fac9cdfe\") " pod="openshift-operators/perses-operator-5bf474d74f-qs7ps" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.857956 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-qs7ps" Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.882379 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw"] Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.932057 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-c4t79"] Feb 18 19:27:55 crc kubenswrapper[4942]: W0218 19:27:55.937057 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc013e97b_628a_48b9_9758_3b8c388b8be9.slice/crio-641f7b283402c1b4f1c8f90c8ba02b32cc72cad10611968363b08f3d8a7940b2 WatchSource:0}: Error finding container 641f7b283402c1b4f1c8f90c8ba02b32cc72cad10611968363b08f3d8a7940b2: Status 404 returned error can't find the container with id 641f7b283402c1b4f1c8f90c8ba02b32cc72cad10611968363b08f3d8a7940b2 Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.975155 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-c4t79" event={"ID":"c013e97b-628a-48b9-9758-3b8c388b8be9","Type":"ContainerStarted","Data":"641f7b283402c1b4f1c8f90c8ba02b32cc72cad10611968363b08f3d8a7940b2"} Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.976135 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nh9c8" event={"ID":"c22c1602-eed9-45f3-93cf-80a86cad1bab","Type":"ContainerStarted","Data":"cd82067559c38da7d771016dbc8d003ed6c20dd918257b961141dced42d3f006"} Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.977596 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw" 
event={"ID":"61120cf0-34ff-4dbe-9a7a-c94fe6960d34","Type":"ContainerStarted","Data":"df25e3cbe418dcc38a3ca7320c976ab8a35f94640782e5e39bc028bb756bc143"} Feb 18 19:27:55 crc kubenswrapper[4942]: I0218 19:27:55.991884 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk"] Feb 18 19:27:56 crc kubenswrapper[4942]: W0218 19:27:56.005002 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cf13df3_44a2_4895_ac06_37d5eba7767d.slice/crio-89a8c8eb91124245f6d7152593de9f43f4a7b39d32375adf529896468714bbf5 WatchSource:0}: Error finding container 89a8c8eb91124245f6d7152593de9f43f4a7b39d32375adf529896468714bbf5: Status 404 returned error can't find the container with id 89a8c8eb91124245f6d7152593de9f43f4a7b39d32375adf529896468714bbf5 Feb 18 19:27:56 crc kubenswrapper[4942]: I0218 19:27:56.113418 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-qs7ps"] Feb 18 19:27:56 crc kubenswrapper[4942]: W0218 19:27:56.122697 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fc17a06_be09_403e_8923_df71fac9cdfe.slice/crio-33c118cc62c87837026207f4a577defbffb7a215c4109c290d071b1684100c56 WatchSource:0}: Error finding container 33c118cc62c87837026207f4a577defbffb7a215c4109c290d071b1684100c56: Status 404 returned error can't find the container with id 33c118cc62c87837026207f4a577defbffb7a215c4109c290d071b1684100c56 Feb 18 19:27:56 crc kubenswrapper[4942]: I0218 19:27:56.983294 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-qs7ps" event={"ID":"8fc17a06-be09-403e-8923-df71fac9cdfe","Type":"ContainerStarted","Data":"33c118cc62c87837026207f4a577defbffb7a215c4109c290d071b1684100c56"} Feb 18 19:27:56 crc kubenswrapper[4942]: I0218 
19:27:56.984590 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk" event={"ID":"4cf13df3-44a2-4895-ac06-37d5eba7767d","Type":"ContainerStarted","Data":"89a8c8eb91124245f6d7152593de9f43f4a7b39d32375adf529896468714bbf5"} Feb 18 19:28:07 crc kubenswrapper[4942]: I0218 19:28:07.591693 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-qs7ps" event={"ID":"8fc17a06-be09-403e-8923-df71fac9cdfe","Type":"ContainerStarted","Data":"678888a0246e824597c9ddf9dea76c8fe8b8bdd40947dff23fab75196d418c41"} Feb 18 19:28:07 crc kubenswrapper[4942]: I0218 19:28:07.622595 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-qs7ps" Feb 18 19:28:07 crc kubenswrapper[4942]: I0218 19:28:07.630008 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk" event={"ID":"4cf13df3-44a2-4895-ac06-37d5eba7767d","Type":"ContainerStarted","Data":"a5095c5c5df4d4e439e71a7788bdcdb8960f9234d622f27913f2ea4ab15a5077"} Feb 18 19:28:07 crc kubenswrapper[4942]: I0218 19:28:07.639316 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nh9c8" event={"ID":"c22c1602-eed9-45f3-93cf-80a86cad1bab","Type":"ContainerStarted","Data":"b919e859fa991b7a897fee4ce0cc9a550c122caae155afaa47a4d1b41bda2b38"} Feb 18 19:28:07 crc kubenswrapper[4942]: I0218 19:28:07.646738 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw" event={"ID":"61120cf0-34ff-4dbe-9a7a-c94fe6960d34","Type":"ContainerStarted","Data":"bc312846055406168f6288fece1f051f1daf157e5ce554dc1354211e400748d2"} Feb 18 19:28:07 crc kubenswrapper[4942]: I0218 19:28:07.649593 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/observability-operator-59bdc8b94-c4t79" event={"ID":"c013e97b-628a-48b9-9758-3b8c388b8be9","Type":"ContainerStarted","Data":"ef35457f469716a899b842dea2fbd203aa16a947aaf54b87b58f68c79f293397"} Feb 18 19:28:07 crc kubenswrapper[4942]: I0218 19:28:07.650298 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-c4t79" Feb 18 19:28:07 crc kubenswrapper[4942]: I0218 19:28:07.654487 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-c4t79" Feb 18 19:28:07 crc kubenswrapper[4942]: I0218 19:28:07.655737 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-qs7ps" podStartSLOduration=2.992199748 podStartE2EDuration="12.655721028s" podCreationTimestamp="2026-02-18 19:27:55 +0000 UTC" firstStartedPulling="2026-02-18 19:27:56.124030555 +0000 UTC m=+635.828963220" lastFinishedPulling="2026-02-18 19:28:05.787551835 +0000 UTC m=+645.492484500" observedRunningTime="2026-02-18 19:28:07.652698248 +0000 UTC m=+647.357630913" watchObservedRunningTime="2026-02-18 19:28:07.655721028 +0000 UTC m=+647.360653693" Feb 18 19:28:07 crc kubenswrapper[4942]: I0218 19:28:07.679488 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-c4t79" podStartSLOduration=2.796576324 podStartE2EDuration="12.679473289s" podCreationTimestamp="2026-02-18 19:27:55 +0000 UTC" firstStartedPulling="2026-02-18 19:27:55.939070464 +0000 UTC m=+635.644003129" lastFinishedPulling="2026-02-18 19:28:05.821967429 +0000 UTC m=+645.526900094" observedRunningTime="2026-02-18 19:28:07.678333478 +0000 UTC m=+647.383266143" watchObservedRunningTime="2026-02-18 19:28:07.679473289 +0000 UTC m=+647.384405954" Feb 18 19:28:07 crc kubenswrapper[4942]: I0218 19:28:07.706564 4942 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-5wrzk" podStartSLOduration=2.929197195 podStartE2EDuration="12.706530727s" podCreationTimestamp="2026-02-18 19:27:55 +0000 UTC" firstStartedPulling="2026-02-18 19:27:56.012258577 +0000 UTC m=+635.717191232" lastFinishedPulling="2026-02-18 19:28:05.789592099 +0000 UTC m=+645.494524764" observedRunningTime="2026-02-18 19:28:07.705398497 +0000 UTC m=+647.410331162" watchObservedRunningTime="2026-02-18 19:28:07.706530727 +0000 UTC m=+647.411463392" Feb 18 19:28:07 crc kubenswrapper[4942]: I0218 19:28:07.738869 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-nh9c8" podStartSLOduration=2.6068059249999997 podStartE2EDuration="12.738828655s" podCreationTimestamp="2026-02-18 19:27:55 +0000 UTC" firstStartedPulling="2026-02-18 19:27:55.689958359 +0000 UTC m=+635.394891024" lastFinishedPulling="2026-02-18 19:28:05.821981089 +0000 UTC m=+645.526913754" observedRunningTime="2026-02-18 19:28:07.734837599 +0000 UTC m=+647.439770284" watchObservedRunningTime="2026-02-18 19:28:07.738828655 +0000 UTC m=+647.443761340" Feb 18 19:28:07 crc kubenswrapper[4942]: I0218 19:28:07.757723 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5489f95489-qvrqw" podStartSLOduration=2.875975533 podStartE2EDuration="12.757702706s" podCreationTimestamp="2026-02-18 19:27:55 +0000 UTC" firstStartedPulling="2026-02-18 19:27:55.904978139 +0000 UTC m=+635.609910804" lastFinishedPulling="2026-02-18 19:28:05.786705312 +0000 UTC m=+645.491637977" observedRunningTime="2026-02-18 19:28:07.756124084 +0000 UTC m=+647.461056759" watchObservedRunningTime="2026-02-18 19:28:07.757702706 +0000 UTC m=+647.462635371" Feb 18 19:28:15 crc kubenswrapper[4942]: I0218 19:28:15.864207 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-operators/perses-operator-5bf474d74f-qs7ps" Feb 18 19:28:31 crc kubenswrapper[4942]: I0218 19:28:31.771311 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2"] Feb 18 19:28:31 crc kubenswrapper[4942]: I0218 19:28:31.773200 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" Feb 18 19:28:31 crc kubenswrapper[4942]: I0218 19:28:31.784135 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 19:28:31 crc kubenswrapper[4942]: I0218 19:28:31.790899 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2"] Feb 18 19:28:31 crc kubenswrapper[4942]: I0218 19:28:31.889821 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjxvj\" (UniqueName: \"kubernetes.io/projected/13b36241-8d25-425c-a2bb-ad032c01715e-kube-api-access-qjxvj\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2\" (UID: \"13b36241-8d25-425c-a2bb-ad032c01715e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" Feb 18 19:28:31 crc kubenswrapper[4942]: I0218 19:28:31.889901 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13b36241-8d25-425c-a2bb-ad032c01715e-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2\" (UID: \"13b36241-8d25-425c-a2bb-ad032c01715e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" Feb 18 19:28:31 crc kubenswrapper[4942]: I0218 19:28:31.889992 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/13b36241-8d25-425c-a2bb-ad032c01715e-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2\" (UID: \"13b36241-8d25-425c-a2bb-ad032c01715e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" Feb 18 19:28:31 crc kubenswrapper[4942]: I0218 19:28:31.991241 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjxvj\" (UniqueName: \"kubernetes.io/projected/13b36241-8d25-425c-a2bb-ad032c01715e-kube-api-access-qjxvj\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2\" (UID: \"13b36241-8d25-425c-a2bb-ad032c01715e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" Feb 18 19:28:31 crc kubenswrapper[4942]: I0218 19:28:31.991287 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13b36241-8d25-425c-a2bb-ad032c01715e-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2\" (UID: \"13b36241-8d25-425c-a2bb-ad032c01715e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" Feb 18 19:28:31 crc kubenswrapper[4942]: I0218 19:28:31.991341 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/13b36241-8d25-425c-a2bb-ad032c01715e-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2\" (UID: \"13b36241-8d25-425c-a2bb-ad032c01715e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" Feb 18 19:28:31 crc kubenswrapper[4942]: I0218 19:28:31.991842 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/13b36241-8d25-425c-a2bb-ad032c01715e-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2\" (UID: \"13b36241-8d25-425c-a2bb-ad032c01715e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" Feb 18 19:28:31 crc kubenswrapper[4942]: I0218 19:28:31.992013 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13b36241-8d25-425c-a2bb-ad032c01715e-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2\" (UID: \"13b36241-8d25-425c-a2bb-ad032c01715e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" Feb 18 19:28:32 crc kubenswrapper[4942]: I0218 19:28:32.012642 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjxvj\" (UniqueName: \"kubernetes.io/projected/13b36241-8d25-425c-a2bb-ad032c01715e-kube-api-access-qjxvj\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2\" (UID: \"13b36241-8d25-425c-a2bb-ad032c01715e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" Feb 18 19:28:32 crc kubenswrapper[4942]: I0218 19:28:32.102346 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" Feb 18 19:28:32 crc kubenswrapper[4942]: I0218 19:28:32.365148 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2"] Feb 18 19:28:32 crc kubenswrapper[4942]: I0218 19:28:32.801082 4942 generic.go:334] "Generic (PLEG): container finished" podID="13b36241-8d25-425c-a2bb-ad032c01715e" containerID="b9855a89980d529712bdb1ac0219e24a48d2bfa0e5ac826ad414d05d73c7a8bc" exitCode=0 Feb 18 19:28:32 crc kubenswrapper[4942]: I0218 19:28:32.801153 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" event={"ID":"13b36241-8d25-425c-a2bb-ad032c01715e","Type":"ContainerDied","Data":"b9855a89980d529712bdb1ac0219e24a48d2bfa0e5ac826ad414d05d73c7a8bc"} Feb 18 19:28:32 crc kubenswrapper[4942]: I0218 19:28:32.801220 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" event={"ID":"13b36241-8d25-425c-a2bb-ad032c01715e","Type":"ContainerStarted","Data":"0aa1a26b7b5317dc8c29f64256ca8ed84810bdb3d8227b9965cdf2753bd0ff57"} Feb 18 19:28:34 crc kubenswrapper[4942]: I0218 19:28:34.819285 4942 generic.go:334] "Generic (PLEG): container finished" podID="13b36241-8d25-425c-a2bb-ad032c01715e" containerID="c16d573685b10c337397d32eabd7bb8785cd8921470fd92faebcb8240b0c04c8" exitCode=0 Feb 18 19:28:34 crc kubenswrapper[4942]: I0218 19:28:34.819429 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" event={"ID":"13b36241-8d25-425c-a2bb-ad032c01715e","Type":"ContainerDied","Data":"c16d573685b10c337397d32eabd7bb8785cd8921470fd92faebcb8240b0c04c8"} Feb 18 19:28:35 crc kubenswrapper[4942]: I0218 19:28:35.828651 4942 
generic.go:334] "Generic (PLEG): container finished" podID="13b36241-8d25-425c-a2bb-ad032c01715e" containerID="440b6fc2ef73ba0831ee5f1ed9047a7827643fed22e83a23678096e76380f922" exitCode=0 Feb 18 19:28:35 crc kubenswrapper[4942]: I0218 19:28:35.828986 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" event={"ID":"13b36241-8d25-425c-a2bb-ad032c01715e","Type":"ContainerDied","Data":"440b6fc2ef73ba0831ee5f1ed9047a7827643fed22e83a23678096e76380f922"} Feb 18 19:28:37 crc kubenswrapper[4942]: I0218 19:28:37.202654 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" Feb 18 19:28:37 crc kubenswrapper[4942]: I0218 19:28:37.366065 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13b36241-8d25-425c-a2bb-ad032c01715e-util\") pod \"13b36241-8d25-425c-a2bb-ad032c01715e\" (UID: \"13b36241-8d25-425c-a2bb-ad032c01715e\") " Feb 18 19:28:37 crc kubenswrapper[4942]: I0218 19:28:37.366230 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/13b36241-8d25-425c-a2bb-ad032c01715e-bundle\") pod \"13b36241-8d25-425c-a2bb-ad032c01715e\" (UID: \"13b36241-8d25-425c-a2bb-ad032c01715e\") " Feb 18 19:28:37 crc kubenswrapper[4942]: I0218 19:28:37.366285 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjxvj\" (UniqueName: \"kubernetes.io/projected/13b36241-8d25-425c-a2bb-ad032c01715e-kube-api-access-qjxvj\") pod \"13b36241-8d25-425c-a2bb-ad032c01715e\" (UID: \"13b36241-8d25-425c-a2bb-ad032c01715e\") " Feb 18 19:28:37 crc kubenswrapper[4942]: I0218 19:28:37.366798 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/13b36241-8d25-425c-a2bb-ad032c01715e-bundle" (OuterVolumeSpecName: "bundle") pod "13b36241-8d25-425c-a2bb-ad032c01715e" (UID: "13b36241-8d25-425c-a2bb-ad032c01715e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:28:37 crc kubenswrapper[4942]: I0218 19:28:37.374076 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13b36241-8d25-425c-a2bb-ad032c01715e-kube-api-access-qjxvj" (OuterVolumeSpecName: "kube-api-access-qjxvj") pod "13b36241-8d25-425c-a2bb-ad032c01715e" (UID: "13b36241-8d25-425c-a2bb-ad032c01715e"). InnerVolumeSpecName "kube-api-access-qjxvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:28:37 crc kubenswrapper[4942]: I0218 19:28:37.381049 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13b36241-8d25-425c-a2bb-ad032c01715e-util" (OuterVolumeSpecName: "util") pod "13b36241-8d25-425c-a2bb-ad032c01715e" (UID: "13b36241-8d25-425c-a2bb-ad032c01715e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:28:37 crc kubenswrapper[4942]: I0218 19:28:37.467849 4942 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/13b36241-8d25-425c-a2bb-ad032c01715e-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:28:37 crc kubenswrapper[4942]: I0218 19:28:37.468245 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjxvj\" (UniqueName: \"kubernetes.io/projected/13b36241-8d25-425c-a2bb-ad032c01715e-kube-api-access-qjxvj\") on node \"crc\" DevicePath \"\"" Feb 18 19:28:37 crc kubenswrapper[4942]: I0218 19:28:37.468263 4942 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/13b36241-8d25-425c-a2bb-ad032c01715e-util\") on node \"crc\" DevicePath \"\"" Feb 18 19:28:37 crc kubenswrapper[4942]: I0218 19:28:37.843862 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" event={"ID":"13b36241-8d25-425c-a2bb-ad032c01715e","Type":"ContainerDied","Data":"0aa1a26b7b5317dc8c29f64256ca8ed84810bdb3d8227b9965cdf2753bd0ff57"} Feb 18 19:28:37 crc kubenswrapper[4942]: I0218 19:28:37.843905 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0aa1a26b7b5317dc8c29f64256ca8ed84810bdb3d8227b9965cdf2753bd0ff57" Feb 18 19:28:37 crc kubenswrapper[4942]: I0218 19:28:37.843986 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecansbj2" Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.420318 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-h4gr7"] Feb 18 19:28:43 crc kubenswrapper[4942]: E0218 19:28:43.420938 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13b36241-8d25-425c-a2bb-ad032c01715e" containerName="pull" Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.420950 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="13b36241-8d25-425c-a2bb-ad032c01715e" containerName="pull" Feb 18 19:28:43 crc kubenswrapper[4942]: E0218 19:28:43.420961 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13b36241-8d25-425c-a2bb-ad032c01715e" containerName="util" Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.420967 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="13b36241-8d25-425c-a2bb-ad032c01715e" containerName="util" Feb 18 19:28:43 crc kubenswrapper[4942]: E0218 19:28:43.420980 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13b36241-8d25-425c-a2bb-ad032c01715e" containerName="extract" Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.420986 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="13b36241-8d25-425c-a2bb-ad032c01715e" containerName="extract" Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.421067 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="13b36241-8d25-425c-a2bb-ad032c01715e" containerName="extract" Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.421506 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-h4gr7" Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.423873 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-5swn2" Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.423966 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.427457 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.432628 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-h4gr7"] Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.443973 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvbhs\" (UniqueName: \"kubernetes.io/projected/2f24c234-adb6-4353-94d0-c91f7d538d3d-kube-api-access-mvbhs\") pod \"nmstate-operator-694c9596b7-h4gr7\" (UID: \"2f24c234-adb6-4353-94d0-c91f7d538d3d\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-h4gr7" Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.544957 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvbhs\" (UniqueName: \"kubernetes.io/projected/2f24c234-adb6-4353-94d0-c91f7d538d3d-kube-api-access-mvbhs\") pod \"nmstate-operator-694c9596b7-h4gr7\" (UID: \"2f24c234-adb6-4353-94d0-c91f7d538d3d\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-h4gr7" Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.570807 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvbhs\" (UniqueName: \"kubernetes.io/projected/2f24c234-adb6-4353-94d0-c91f7d538d3d-kube-api-access-mvbhs\") pod \"nmstate-operator-694c9596b7-h4gr7\" (UID: 
\"2f24c234-adb6-4353-94d0-c91f7d538d3d\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-h4gr7" Feb 18 19:28:43 crc kubenswrapper[4942]: I0218 19:28:43.744791 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-h4gr7" Feb 18 19:28:44 crc kubenswrapper[4942]: I0218 19:28:44.052561 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-h4gr7"] Feb 18 19:28:44 crc kubenswrapper[4942]: W0218 19:28:44.055509 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f24c234_adb6_4353_94d0_c91f7d538d3d.slice/crio-bceef4da1e07916b4ede2c4038e4d07623456c4674603dd4e83a76af0f17dc7c WatchSource:0}: Error finding container bceef4da1e07916b4ede2c4038e4d07623456c4674603dd4e83a76af0f17dc7c: Status 404 returned error can't find the container with id bceef4da1e07916b4ede2c4038e4d07623456c4674603dd4e83a76af0f17dc7c Feb 18 19:28:44 crc kubenswrapper[4942]: I0218 19:28:44.910488 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-h4gr7" event={"ID":"2f24c234-adb6-4353-94d0-c91f7d538d3d","Type":"ContainerStarted","Data":"bceef4da1e07916b4ede2c4038e4d07623456c4674603dd4e83a76af0f17dc7c"} Feb 18 19:28:47 crc kubenswrapper[4942]: I0218 19:28:47.929178 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-h4gr7" event={"ID":"2f24c234-adb6-4353-94d0-c91f7d538d3d","Type":"ContainerStarted","Data":"e8f2bc028988ca179eba1ab033011b48819215697390f2d0d534b8e8731572ca"} Feb 18 19:28:47 crc kubenswrapper[4942]: I0218 19:28:47.953637 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-h4gr7" podStartSLOduration=2.115184122 podStartE2EDuration="4.953617204s" podCreationTimestamp="2026-02-18 19:28:43 +0000 UTC" 
firstStartedPulling="2026-02-18 19:28:44.057297772 +0000 UTC m=+683.762230437" lastFinishedPulling="2026-02-18 19:28:46.895730814 +0000 UTC m=+686.600663519" observedRunningTime="2026-02-18 19:28:47.949785339 +0000 UTC m=+687.654718014" watchObservedRunningTime="2026-02-18 19:28:47.953617204 +0000 UTC m=+687.658549889" Feb 18 19:28:52 crc kubenswrapper[4942]: I0218 19:28:52.920455 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-rttzx"] Feb 18 19:28:52 crc kubenswrapper[4942]: I0218 19:28:52.922118 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-rttzx" Feb 18 19:28:52 crc kubenswrapper[4942]: I0218 19:28:52.924875 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-r5wfx" Feb 18 19:28:52 crc kubenswrapper[4942]: I0218 19:28:52.947254 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw"] Feb 18 19:28:52 crc kubenswrapper[4942]: I0218 19:28:52.948395 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" Feb 18 19:28:52 crc kubenswrapper[4942]: I0218 19:28:52.952622 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 18 19:28:52 crc kubenswrapper[4942]: I0218 19:28:52.977226 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-rttzx"] Feb 18 19:28:52 crc kubenswrapper[4942]: I0218 19:28:52.984906 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-plkfj"] Feb 18 19:28:52 crc kubenswrapper[4942]: I0218 19:28:52.985963 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:52 crc kubenswrapper[4942]: I0218 19:28:52.994056 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw"] Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.068054 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45xgj\" (UniqueName: \"kubernetes.io/projected/76ecd9a6-426a-4dd2-b701-dc478849bf8c-kube-api-access-45xgj\") pod \"nmstate-metrics-58c85c668d-rttzx\" (UID: \"76ecd9a6-426a-4dd2-b701-dc478849bf8c\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-rttzx" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.068138 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5f16510c-481e-41fd-a588-da27d576478c-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-bqlvw\" (UID: \"5f16510c-481e-41fd-a588-da27d576478c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.068177 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6cm4\" (UniqueName: \"kubernetes.io/projected/5f16510c-481e-41fd-a588-da27d576478c-kube-api-access-d6cm4\") pod \"nmstate-webhook-866bcb46dc-bqlvw\" (UID: \"5f16510c-481e-41fd-a588-da27d576478c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.085969 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl"] Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.086824 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.089883 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.089936 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-fgggf" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.089950 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.103825 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl"] Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.169311 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdjcw\" (UniqueName: \"kubernetes.io/projected/0069ee73-95fc-4f06-980a-585ed1af868b-kube-api-access-vdjcw\") pod \"nmstate-handler-plkfj\" (UID: \"0069ee73-95fc-4f06-980a-585ed1af868b\") " pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.169361 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0069ee73-95fc-4f06-980a-585ed1af868b-ovs-socket\") pod \"nmstate-handler-plkfj\" (UID: \"0069ee73-95fc-4f06-980a-585ed1af868b\") " pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.169435 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45xgj\" (UniqueName: \"kubernetes.io/projected/76ecd9a6-426a-4dd2-b701-dc478849bf8c-kube-api-access-45xgj\") pod \"nmstate-metrics-58c85c668d-rttzx\" (UID: \"76ecd9a6-426a-4dd2-b701-dc478849bf8c\") " 
pod="openshift-nmstate/nmstate-metrics-58c85c668d-rttzx" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.169468 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0069ee73-95fc-4f06-980a-585ed1af868b-dbus-socket\") pod \"nmstate-handler-plkfj\" (UID: \"0069ee73-95fc-4f06-980a-585ed1af868b\") " pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.170053 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5f16510c-481e-41fd-a588-da27d576478c-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-bqlvw\" (UID: \"5f16510c-481e-41fd-a588-da27d576478c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" Feb 18 19:28:53 crc kubenswrapper[4942]: E0218 19:28:53.170170 4942 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 18 19:28:53 crc kubenswrapper[4942]: E0218 19:28:53.170241 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f16510c-481e-41fd-a588-da27d576478c-tls-key-pair podName:5f16510c-481e-41fd-a588-da27d576478c nodeName:}" failed. No retries permitted until 2026-02-18 19:28:53.670212195 +0000 UTC m=+693.375144850 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/5f16510c-481e-41fd-a588-da27d576478c-tls-key-pair") pod "nmstate-webhook-866bcb46dc-bqlvw" (UID: "5f16510c-481e-41fd-a588-da27d576478c") : secret "openshift-nmstate-webhook" not found Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.170321 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6cm4\" (UniqueName: \"kubernetes.io/projected/5f16510c-481e-41fd-a588-da27d576478c-kube-api-access-d6cm4\") pod \"nmstate-webhook-866bcb46dc-bqlvw\" (UID: \"5f16510c-481e-41fd-a588-da27d576478c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.170507 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0069ee73-95fc-4f06-980a-585ed1af868b-nmstate-lock\") pod \"nmstate-handler-plkfj\" (UID: \"0069ee73-95fc-4f06-980a-585ed1af868b\") " pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.192422 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45xgj\" (UniqueName: \"kubernetes.io/projected/76ecd9a6-426a-4dd2-b701-dc478849bf8c-kube-api-access-45xgj\") pod \"nmstate-metrics-58c85c668d-rttzx\" (UID: \"76ecd9a6-426a-4dd2-b701-dc478849bf8c\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-rttzx" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.205054 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6cm4\" (UniqueName: \"kubernetes.io/projected/5f16510c-481e-41fd-a588-da27d576478c-kube-api-access-d6cm4\") pod \"nmstate-webhook-866bcb46dc-bqlvw\" (UID: \"5f16510c-481e-41fd-a588-da27d576478c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.241837 4942 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-rttzx" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.272011 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fd2af045-6e0c-43de-8714-f052306c8899-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-4jzjl\" (UID: \"fd2af045-6e0c-43de-8714-f052306c8899\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.272068 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0069ee73-95fc-4f06-980a-585ed1af868b-dbus-socket\") pod \"nmstate-handler-plkfj\" (UID: \"0069ee73-95fc-4f06-980a-585ed1af868b\") " pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.272429 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0069ee73-95fc-4f06-980a-585ed1af868b-dbus-socket\") pod \"nmstate-handler-plkfj\" (UID: \"0069ee73-95fc-4f06-980a-585ed1af868b\") " pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.272485 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnsmg\" (UniqueName: \"kubernetes.io/projected/fd2af045-6e0c-43de-8714-f052306c8899-kube-api-access-pnsmg\") pod \"nmstate-console-plugin-5c78fc5d65-4jzjl\" (UID: \"fd2af045-6e0c-43de-8714-f052306c8899\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.272514 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/fd2af045-6e0c-43de-8714-f052306c8899-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-4jzjl\" (UID: \"fd2af045-6e0c-43de-8714-f052306c8899\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.272604 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0069ee73-95fc-4f06-980a-585ed1af868b-nmstate-lock\") pod \"nmstate-handler-plkfj\" (UID: \"0069ee73-95fc-4f06-980a-585ed1af868b\") " pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.272678 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0069ee73-95fc-4f06-980a-585ed1af868b-nmstate-lock\") pod \"nmstate-handler-plkfj\" (UID: \"0069ee73-95fc-4f06-980a-585ed1af868b\") " pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.272641 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdjcw\" (UniqueName: \"kubernetes.io/projected/0069ee73-95fc-4f06-980a-585ed1af868b-kube-api-access-vdjcw\") pod \"nmstate-handler-plkfj\" (UID: \"0069ee73-95fc-4f06-980a-585ed1af868b\") " pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.288602 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0069ee73-95fc-4f06-980a-585ed1af868b-ovs-socket\") pod \"nmstate-handler-plkfj\" (UID: \"0069ee73-95fc-4f06-980a-585ed1af868b\") " pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.288789 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/0069ee73-95fc-4f06-980a-585ed1af868b-ovs-socket\") pod \"nmstate-handler-plkfj\" (UID: \"0069ee73-95fc-4f06-980a-585ed1af868b\") " pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.321572 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-8b7698c8d-dspsm"] Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.323120 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.339159 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8b7698c8d-dspsm"] Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.340048 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdjcw\" (UniqueName: \"kubernetes.io/projected/0069ee73-95fc-4f06-980a-585ed1af868b-kube-api-access-vdjcw\") pod \"nmstate-handler-plkfj\" (UID: \"0069ee73-95fc-4f06-980a-585ed1af868b\") " pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.390214 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fd2af045-6e0c-43de-8714-f052306c8899-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-4jzjl\" (UID: \"fd2af045-6e0c-43de-8714-f052306c8899\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.390304 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnsmg\" (UniqueName: \"kubernetes.io/projected/fd2af045-6e0c-43de-8714-f052306c8899-kube-api-access-pnsmg\") pod \"nmstate-console-plugin-5c78fc5d65-4jzjl\" (UID: \"fd2af045-6e0c-43de-8714-f052306c8899\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" Feb 18 19:28:53 crc kubenswrapper[4942]: 
I0218 19:28:53.390487 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fd2af045-6e0c-43de-8714-f052306c8899-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-4jzjl\" (UID: \"fd2af045-6e0c-43de-8714-f052306c8899\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.391637 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fd2af045-6e0c-43de-8714-f052306c8899-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-4jzjl\" (UID: \"fd2af045-6e0c-43de-8714-f052306c8899\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.396256 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fd2af045-6e0c-43de-8714-f052306c8899-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-4jzjl\" (UID: \"fd2af045-6e0c-43de-8714-f052306c8899\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.412031 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnsmg\" (UniqueName: \"kubernetes.io/projected/fd2af045-6e0c-43de-8714-f052306c8899-kube-api-access-pnsmg\") pod \"nmstate-console-plugin-5c78fc5d65-4jzjl\" (UID: \"fd2af045-6e0c-43de-8714-f052306c8899\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.491489 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/238ade24-4172-473c-b7e5-c51e7ecce031-trusted-ca-bundle\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " 
pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.491537 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/238ade24-4172-473c-b7e5-c51e7ecce031-console-config\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.491559 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/238ade24-4172-473c-b7e5-c51e7ecce031-oauth-serving-cert\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.491739 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwsm2\" (UniqueName: \"kubernetes.io/projected/238ade24-4172-473c-b7e5-c51e7ecce031-kube-api-access-zwsm2\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.491804 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/238ade24-4172-473c-b7e5-c51e7ecce031-service-ca\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.491827 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/238ade24-4172-473c-b7e5-c51e7ecce031-console-oauth-config\") pod 
\"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.492026 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/238ade24-4172-473c-b7e5-c51e7ecce031-console-serving-cert\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.593305 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/238ade24-4172-473c-b7e5-c51e7ecce031-console-serving-cert\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.593375 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/238ade24-4172-473c-b7e5-c51e7ecce031-trusted-ca-bundle\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.593397 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/238ade24-4172-473c-b7e5-c51e7ecce031-console-config\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.593413 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/238ade24-4172-473c-b7e5-c51e7ecce031-oauth-serving-cert\") pod 
\"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.593457 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwsm2\" (UniqueName: \"kubernetes.io/projected/238ade24-4172-473c-b7e5-c51e7ecce031-kube-api-access-zwsm2\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.593476 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/238ade24-4172-473c-b7e5-c51e7ecce031-service-ca\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.593493 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/238ade24-4172-473c-b7e5-c51e7ecce031-console-oauth-config\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.594493 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/238ade24-4172-473c-b7e5-c51e7ecce031-console-config\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.595288 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/238ade24-4172-473c-b7e5-c51e7ecce031-oauth-serving-cert\") pod \"console-8b7698c8d-dspsm\" (UID: 
\"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.595305 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/238ade24-4172-473c-b7e5-c51e7ecce031-trusted-ca-bundle\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.595312 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-rttzx"] Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.595414 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/238ade24-4172-473c-b7e5-c51e7ecce031-service-ca\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.599032 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/238ade24-4172-473c-b7e5-c51e7ecce031-console-oauth-config\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.599314 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/238ade24-4172-473c-b7e5-c51e7ecce031-console-serving-cert\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.605518 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.614824 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwsm2\" (UniqueName: \"kubernetes.io/projected/238ade24-4172-473c-b7e5-c51e7ecce031-kube-api-access-zwsm2\") pod \"console-8b7698c8d-dspsm\" (UID: \"238ade24-4172-473c-b7e5-c51e7ecce031\") " pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: W0218 19:28:53.625566 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0069ee73_95fc_4f06_980a_585ed1af868b.slice/crio-834bf83da2a9a51758c3d8dedc723d3fc8b39d236b3e0c59e98e95df37d76594 WatchSource:0}: Error finding container 834bf83da2a9a51758c3d8dedc723d3fc8b39d236b3e0c59e98e95df37d76594: Status 404 returned error can't find the container with id 834bf83da2a9a51758c3d8dedc723d3fc8b39d236b3e0c59e98e95df37d76594 Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.659435 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.695099 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5f16510c-481e-41fd-a588-da27d576478c-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-bqlvw\" (UID: \"5f16510c-481e-41fd-a588-da27d576478c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.698238 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5f16510c-481e-41fd-a588-da27d576478c-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-bqlvw\" (UID: \"5f16510c-481e-41fd-a588-da27d576478c\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.700110 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.865411 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.866182 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8b7698c8d-dspsm"] Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.910097 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl"] Feb 18 19:28:53 crc kubenswrapper[4942]: W0218 19:28:53.915274 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd2af045_6e0c_43de_8714_f052306c8899.slice/crio-2436b1aae004bf913cf836ae8657158f8e9a7ab797094b8bdfd2fbb694c29eaf WatchSource:0}: Error finding container 2436b1aae004bf913cf836ae8657158f8e9a7ab797094b8bdfd2fbb694c29eaf: Status 404 returned error can't find the container with id 2436b1aae004bf913cf836ae8657158f8e9a7ab797094b8bdfd2fbb694c29eaf Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.985835 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-plkfj" event={"ID":"0069ee73-95fc-4f06-980a-585ed1af868b","Type":"ContainerStarted","Data":"834bf83da2a9a51758c3d8dedc723d3fc8b39d236b3e0c59e98e95df37d76594"} Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.987781 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8b7698c8d-dspsm" event={"ID":"238ade24-4172-473c-b7e5-c51e7ecce031","Type":"ContainerStarted","Data":"0ccacfcc43c2a6cfb74658e58ef58ab962555a2736895f73d2af1368c89dbcd7"} Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 19:28:53.988746 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" event={"ID":"fd2af045-6e0c-43de-8714-f052306c8899","Type":"ContainerStarted","Data":"2436b1aae004bf913cf836ae8657158f8e9a7ab797094b8bdfd2fbb694c29eaf"} Feb 18 19:28:53 crc kubenswrapper[4942]: I0218 
19:28:53.989852 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-rttzx" event={"ID":"76ecd9a6-426a-4dd2-b701-dc478849bf8c","Type":"ContainerStarted","Data":"6fa5d6079c231fa1550709d168b8ee8e1d57c942b4933df3d72edf9c870e7152"} Feb 18 19:28:54 crc kubenswrapper[4942]: I0218 19:28:54.130167 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw"] Feb 18 19:28:54 crc kubenswrapper[4942]: W0218 19:28:54.135582 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f16510c_481e_41fd_a588_da27d576478c.slice/crio-7be0c00003b55b2b5c5bc5a95a7a95095dc8d13a8bc2a94ceb155474f53862f3 WatchSource:0}: Error finding container 7be0c00003b55b2b5c5bc5a95a7a95095dc8d13a8bc2a94ceb155474f53862f3: Status 404 returned error can't find the container with id 7be0c00003b55b2b5c5bc5a95a7a95095dc8d13a8bc2a94ceb155474f53862f3 Feb 18 19:28:54 crc kubenswrapper[4942]: I0218 19:28:54.997920 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8b7698c8d-dspsm" event={"ID":"238ade24-4172-473c-b7e5-c51e7ecce031","Type":"ContainerStarted","Data":"5d112f5c3c4b1b6ee19325c6fe6b02c146dbfa2faaea1efac98f91ea5ee8f1b3"} Feb 18 19:28:55 crc kubenswrapper[4942]: I0218 19:28:55.002997 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" event={"ID":"5f16510c-481e-41fd-a588-da27d576478c","Type":"ContainerStarted","Data":"7be0c00003b55b2b5c5bc5a95a7a95095dc8d13a8bc2a94ceb155474f53862f3"} Feb 18 19:28:55 crc kubenswrapper[4942]: I0218 19:28:55.022062 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-8b7698c8d-dspsm" podStartSLOduration=2.022035699 podStartE2EDuration="2.022035699s" podCreationTimestamp="2026-02-18 19:28:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:28:55.014653277 +0000 UTC m=+694.719585972" watchObservedRunningTime="2026-02-18 19:28:55.022035699 +0000 UTC m=+694.726968404" Feb 18 19:28:58 crc kubenswrapper[4942]: I0218 19:28:58.032523 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-plkfj" event={"ID":"0069ee73-95fc-4f06-980a-585ed1af868b","Type":"ContainerStarted","Data":"cc82c8372ba2ff3eb746eef78db366f9b56189a4d900f3a755c68bff8b3a9ae3"} Feb 18 19:28:58 crc kubenswrapper[4942]: I0218 19:28:58.033251 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:28:58 crc kubenswrapper[4942]: I0218 19:28:58.034735 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" event={"ID":"fd2af045-6e0c-43de-8714-f052306c8899","Type":"ContainerStarted","Data":"d165365662a30777e1e51ab8a0636e83fd0404f57429c996a6c1c5303716c2ce"} Feb 18 19:28:58 crc kubenswrapper[4942]: I0218 19:28:58.037272 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" event={"ID":"5f16510c-481e-41fd-a588-da27d576478c","Type":"ContainerStarted","Data":"9f5b68e120fc8ca512adc6d2d5994f9a2ce41a443b782c0fcfb090a39d700a90"} Feb 18 19:28:58 crc kubenswrapper[4942]: I0218 19:28:58.037390 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" Feb 18 19:28:58 crc kubenswrapper[4942]: I0218 19:28:58.038430 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-rttzx" event={"ID":"76ecd9a6-426a-4dd2-b701-dc478849bf8c","Type":"ContainerStarted","Data":"0db4d45bd4c037d8c179c5a9e9e987896667e0553bf4015a5670fa5c63b63c5e"} Feb 18 19:28:58 crc kubenswrapper[4942]: I0218 19:28:58.060959 4942 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-nmstate/nmstate-handler-plkfj" podStartSLOduration=2.841517816 podStartE2EDuration="6.060934646s" podCreationTimestamp="2026-02-18 19:28:52 +0000 UTC" firstStartedPulling="2026-02-18 19:28:53.627325004 +0000 UTC m=+693.332257669" lastFinishedPulling="2026-02-18 19:28:56.846741814 +0000 UTC m=+696.551674499" observedRunningTime="2026-02-18 19:28:58.049822052 +0000 UTC m=+697.754754737" watchObservedRunningTime="2026-02-18 19:28:58.060934646 +0000 UTC m=+697.765867321" Feb 18 19:28:58 crc kubenswrapper[4942]: I0218 19:28:58.070679 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-4jzjl" podStartSLOduration=2.166569471 podStartE2EDuration="5.070658007s" podCreationTimestamp="2026-02-18 19:28:53 +0000 UTC" firstStartedPulling="2026-02-18 19:28:53.917630789 +0000 UTC m=+693.622563454" lastFinishedPulling="2026-02-18 19:28:56.821719325 +0000 UTC m=+696.526651990" observedRunningTime="2026-02-18 19:28:58.066176086 +0000 UTC m=+697.771108791" watchObservedRunningTime="2026-02-18 19:28:58.070658007 +0000 UTC m=+697.775590672" Feb 18 19:28:58 crc kubenswrapper[4942]: I0218 19:28:58.088878 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" podStartSLOduration=3.38577632 podStartE2EDuration="6.088859127s" podCreationTimestamp="2026-02-18 19:28:52 +0000 UTC" firstStartedPulling="2026-02-18 19:28:54.137441643 +0000 UTC m=+693.842374308" lastFinishedPulling="2026-02-18 19:28:56.84052441 +0000 UTC m=+696.545457115" observedRunningTime="2026-02-18 19:28:58.084887939 +0000 UTC m=+697.789820624" watchObservedRunningTime="2026-02-18 19:28:58.088859127 +0000 UTC m=+697.793791792" Feb 18 19:29:00 crc kubenswrapper[4942]: I0218 19:29:00.062665 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-rttzx" 
event={"ID":"76ecd9a6-426a-4dd2-b701-dc478849bf8c","Type":"ContainerStarted","Data":"601c55800d31e3074e9adbe935099ddd7251b2b7763286942062bbbfcdc8a012"} Feb 18 19:29:00 crc kubenswrapper[4942]: I0218 19:29:00.095843 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-rttzx" podStartSLOduration=2.736358299 podStartE2EDuration="8.095814947s" podCreationTimestamp="2026-02-18 19:28:52 +0000 UTC" firstStartedPulling="2026-02-18 19:28:53.616023665 +0000 UTC m=+693.320956330" lastFinishedPulling="2026-02-18 19:28:58.975480313 +0000 UTC m=+698.680412978" observedRunningTime="2026-02-18 19:29:00.090209308 +0000 UTC m=+699.795142003" watchObservedRunningTime="2026-02-18 19:29:00.095814947 +0000 UTC m=+699.800747642" Feb 18 19:29:03 crc kubenswrapper[4942]: I0218 19:29:03.632575 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-plkfj" Feb 18 19:29:03 crc kubenswrapper[4942]: I0218 19:29:03.660477 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:29:03 crc kubenswrapper[4942]: I0218 19:29:03.660525 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:29:03 crc kubenswrapper[4942]: I0218 19:29:03.664668 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:29:04 crc kubenswrapper[4942]: I0218 19:29:04.087577 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-8b7698c8d-dspsm" Feb 18 19:29:04 crc kubenswrapper[4942]: I0218 19:29:04.143040 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-5l26l"] Feb 18 19:29:13 crc kubenswrapper[4942]: I0218 19:29:13.872138 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-bqlvw" Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.339262 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc"] Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.342212 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.345260 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.354682 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc"] Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.453376 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa407b7c-08d9-4762-9aea-25d6aa8e4338-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc\" (UID: \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.453579 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2q9v\" (UniqueName: \"kubernetes.io/projected/aa407b7c-08d9-4762-9aea-25d6aa8e4338-kube-api-access-d2q9v\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc\" (UID: \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.453749 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa407b7c-08d9-4762-9aea-25d6aa8e4338-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc\" (UID: \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.555251 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa407b7c-08d9-4762-9aea-25d6aa8e4338-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc\" (UID: \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.555322 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2q9v\" (UniqueName: \"kubernetes.io/projected/aa407b7c-08d9-4762-9aea-25d6aa8e4338-kube-api-access-d2q9v\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc\" (UID: \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.555354 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa407b7c-08d9-4762-9aea-25d6aa8e4338-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc\" (UID: \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.555879 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa407b7c-08d9-4762-9aea-25d6aa8e4338-util\") pod 
\"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc\" (UID: \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.555879 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa407b7c-08d9-4762-9aea-25d6aa8e4338-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc\" (UID: \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.578630 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2q9v\" (UniqueName: \"kubernetes.io/projected/aa407b7c-08d9-4762-9aea-25d6aa8e4338-kube-api-access-d2q9v\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc\" (UID: \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" Feb 18 19:29:28 crc kubenswrapper[4942]: I0218 19:29:28.698121 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.113739 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc"] Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.177302 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-5l26l" podUID="5683bb73-dc7f-40ed-86cd-0c08f2d38147" containerName="console" containerID="cri-o://49458ca39b9ba344fe8c10dba2a8e9386f116a326c032cdb747d289d4ac6f704" gracePeriod=15 Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.271235 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" event={"ID":"aa407b7c-08d9-4762-9aea-25d6aa8e4338","Type":"ContainerStarted","Data":"b2a2b13a57b633a3c9df216efc2916398a4a95bc31b1ab284c19a801ebf1cb8e"} Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.271277 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" event={"ID":"aa407b7c-08d9-4762-9aea-25d6aa8e4338","Type":"ContainerStarted","Data":"9cf5918a0319959be412172b0bd964ed900873c62b9ea55107030c04fc05324f"} Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.531666 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-5l26l_5683bb73-dc7f-40ed-86cd-0c08f2d38147/console/0.log" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.532028 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.668702 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-oauth-config\") pod \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.668825 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-oauth-serving-cert\") pod \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.668898 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn6b4\" (UniqueName: \"kubernetes.io/projected/5683bb73-dc7f-40ed-86cd-0c08f2d38147-kube-api-access-nn6b4\") pod \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.668958 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-config\") pod \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.669013 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-trusted-ca-bundle\") pod \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.669080 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-service-ca\") pod \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.669161 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-serving-cert\") pod \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\" (UID: \"5683bb73-dc7f-40ed-86cd-0c08f2d38147\") " Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.670474 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "5683bb73-dc7f-40ed-86cd-0c08f2d38147" (UID: "5683bb73-dc7f-40ed-86cd-0c08f2d38147"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.670877 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-config" (OuterVolumeSpecName: "console-config") pod "5683bb73-dc7f-40ed-86cd-0c08f2d38147" (UID: "5683bb73-dc7f-40ed-86cd-0c08f2d38147"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.670911 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5683bb73-dc7f-40ed-86cd-0c08f2d38147" (UID: "5683bb73-dc7f-40ed-86cd-0c08f2d38147"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.671344 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-service-ca" (OuterVolumeSpecName: "service-ca") pod "5683bb73-dc7f-40ed-86cd-0c08f2d38147" (UID: "5683bb73-dc7f-40ed-86cd-0c08f2d38147"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.677679 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "5683bb73-dc7f-40ed-86cd-0c08f2d38147" (UID: "5683bb73-dc7f-40ed-86cd-0c08f2d38147"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.678107 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5683bb73-dc7f-40ed-86cd-0c08f2d38147-kube-api-access-nn6b4" (OuterVolumeSpecName: "kube-api-access-nn6b4") pod "5683bb73-dc7f-40ed-86cd-0c08f2d38147" (UID: "5683bb73-dc7f-40ed-86cd-0c08f2d38147"). InnerVolumeSpecName "kube-api-access-nn6b4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.678210 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "5683bb73-dc7f-40ed-86cd-0c08f2d38147" (UID: "5683bb73-dc7f-40ed-86cd-0c08f2d38147"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.770455 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn6b4\" (UniqueName: \"kubernetes.io/projected/5683bb73-dc7f-40ed-86cd-0c08f2d38147-kube-api-access-nn6b4\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.770506 4942 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.770522 4942 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.770538 4942 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.770553 4942 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.770568 4942 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5683bb73-dc7f-40ed-86cd-0c08f2d38147-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:29 crc kubenswrapper[4942]: I0218 19:29:29.770618 4942 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5683bb73-dc7f-40ed-86cd-0c08f2d38147-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:30 crc 
kubenswrapper[4942]: I0218 19:29:30.280419 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-5l26l_5683bb73-dc7f-40ed-86cd-0c08f2d38147/console/0.log" Feb 18 19:29:30 crc kubenswrapper[4942]: I0218 19:29:30.280482 4942 generic.go:334] "Generic (PLEG): container finished" podID="5683bb73-dc7f-40ed-86cd-0c08f2d38147" containerID="49458ca39b9ba344fe8c10dba2a8e9386f116a326c032cdb747d289d4ac6f704" exitCode=2 Feb 18 19:29:30 crc kubenswrapper[4942]: I0218 19:29:30.280553 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5l26l" event={"ID":"5683bb73-dc7f-40ed-86cd-0c08f2d38147","Type":"ContainerDied","Data":"49458ca39b9ba344fe8c10dba2a8e9386f116a326c032cdb747d289d4ac6f704"} Feb 18 19:29:30 crc kubenswrapper[4942]: I0218 19:29:30.280633 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-5l26l" Feb 18 19:29:30 crc kubenswrapper[4942]: I0218 19:29:30.280863 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5l26l" event={"ID":"5683bb73-dc7f-40ed-86cd-0c08f2d38147","Type":"ContainerDied","Data":"76d66aaf89f1a5aa5957e318124bcfa92f6a6c37df6e5abcffc91fd45db84790"} Feb 18 19:29:30 crc kubenswrapper[4942]: I0218 19:29:30.280893 4942 scope.go:117] "RemoveContainer" containerID="49458ca39b9ba344fe8c10dba2a8e9386f116a326c032cdb747d289d4ac6f704" Feb 18 19:29:30 crc kubenswrapper[4942]: I0218 19:29:30.282877 4942 generic.go:334] "Generic (PLEG): container finished" podID="aa407b7c-08d9-4762-9aea-25d6aa8e4338" containerID="b2a2b13a57b633a3c9df216efc2916398a4a95bc31b1ab284c19a801ebf1cb8e" exitCode=0 Feb 18 19:29:30 crc kubenswrapper[4942]: I0218 19:29:30.282901 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" 
event={"ID":"aa407b7c-08d9-4762-9aea-25d6aa8e4338","Type":"ContainerDied","Data":"b2a2b13a57b633a3c9df216efc2916398a4a95bc31b1ab284c19a801ebf1cb8e"} Feb 18 19:29:30 crc kubenswrapper[4942]: I0218 19:29:30.308175 4942 scope.go:117] "RemoveContainer" containerID="49458ca39b9ba344fe8c10dba2a8e9386f116a326c032cdb747d289d4ac6f704" Feb 18 19:29:30 crc kubenswrapper[4942]: E0218 19:29:30.308496 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49458ca39b9ba344fe8c10dba2a8e9386f116a326c032cdb747d289d4ac6f704\": container with ID starting with 49458ca39b9ba344fe8c10dba2a8e9386f116a326c032cdb747d289d4ac6f704 not found: ID does not exist" containerID="49458ca39b9ba344fe8c10dba2a8e9386f116a326c032cdb747d289d4ac6f704" Feb 18 19:29:30 crc kubenswrapper[4942]: I0218 19:29:30.308526 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49458ca39b9ba344fe8c10dba2a8e9386f116a326c032cdb747d289d4ac6f704"} err="failed to get container status \"49458ca39b9ba344fe8c10dba2a8e9386f116a326c032cdb747d289d4ac6f704\": rpc error: code = NotFound desc = could not find container \"49458ca39b9ba344fe8c10dba2a8e9386f116a326c032cdb747d289d4ac6f704\": container with ID starting with 49458ca39b9ba344fe8c10dba2a8e9386f116a326c032cdb747d289d4ac6f704 not found: ID does not exist" Feb 18 19:29:30 crc kubenswrapper[4942]: I0218 19:29:30.327533 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-5l26l"] Feb 18 19:29:30 crc kubenswrapper[4942]: I0218 19:29:30.331103 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-5l26l"] Feb 18 19:29:31 crc kubenswrapper[4942]: I0218 19:29:31.046259 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5683bb73-dc7f-40ed-86cd-0c08f2d38147" path="/var/lib/kubelet/pods/5683bb73-dc7f-40ed-86cd-0c08f2d38147/volumes" Feb 18 19:29:33 crc 
kubenswrapper[4942]: I0218 19:29:33.312595 4942 generic.go:334] "Generic (PLEG): container finished" podID="aa407b7c-08d9-4762-9aea-25d6aa8e4338" containerID="3c528a9e0c17a160edba2f2a8a2b69ef6605da960e82d1cc9013c779b490ba81" exitCode=0 Feb 18 19:29:33 crc kubenswrapper[4942]: I0218 19:29:33.312684 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" event={"ID":"aa407b7c-08d9-4762-9aea-25d6aa8e4338","Type":"ContainerDied","Data":"3c528a9e0c17a160edba2f2a8a2b69ef6605da960e82d1cc9013c779b490ba81"} Feb 18 19:29:34 crc kubenswrapper[4942]: I0218 19:29:34.322472 4942 generic.go:334] "Generic (PLEG): container finished" podID="aa407b7c-08d9-4762-9aea-25d6aa8e4338" containerID="5f1301628b22efbf7e7a27e3b565eeb55a5c343b9627acbcb363cadab023d5dc" exitCode=0 Feb 18 19:29:34 crc kubenswrapper[4942]: I0218 19:29:34.322519 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" event={"ID":"aa407b7c-08d9-4762-9aea-25d6aa8e4338","Type":"ContainerDied","Data":"5f1301628b22efbf7e7a27e3b565eeb55a5c343b9627acbcb363cadab023d5dc"} Feb 18 19:29:35 crc kubenswrapper[4942]: I0218 19:29:35.658078 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" Feb 18 19:29:35 crc kubenswrapper[4942]: I0218 19:29:35.751842 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2q9v\" (UniqueName: \"kubernetes.io/projected/aa407b7c-08d9-4762-9aea-25d6aa8e4338-kube-api-access-d2q9v\") pod \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\" (UID: \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\") " Feb 18 19:29:35 crc kubenswrapper[4942]: I0218 19:29:35.751949 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa407b7c-08d9-4762-9aea-25d6aa8e4338-bundle\") pod \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\" (UID: \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\") " Feb 18 19:29:35 crc kubenswrapper[4942]: I0218 19:29:35.752107 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa407b7c-08d9-4762-9aea-25d6aa8e4338-util\") pod \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\" (UID: \"aa407b7c-08d9-4762-9aea-25d6aa8e4338\") " Feb 18 19:29:35 crc kubenswrapper[4942]: I0218 19:29:35.753259 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa407b7c-08d9-4762-9aea-25d6aa8e4338-bundle" (OuterVolumeSpecName: "bundle") pod "aa407b7c-08d9-4762-9aea-25d6aa8e4338" (UID: "aa407b7c-08d9-4762-9aea-25d6aa8e4338"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:29:35 crc kubenswrapper[4942]: I0218 19:29:35.757570 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa407b7c-08d9-4762-9aea-25d6aa8e4338-kube-api-access-d2q9v" (OuterVolumeSpecName: "kube-api-access-d2q9v") pod "aa407b7c-08d9-4762-9aea-25d6aa8e4338" (UID: "aa407b7c-08d9-4762-9aea-25d6aa8e4338"). InnerVolumeSpecName "kube-api-access-d2q9v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:29:35 crc kubenswrapper[4942]: I0218 19:29:35.776579 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa407b7c-08d9-4762-9aea-25d6aa8e4338-util" (OuterVolumeSpecName: "util") pod "aa407b7c-08d9-4762-9aea-25d6aa8e4338" (UID: "aa407b7c-08d9-4762-9aea-25d6aa8e4338"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:29:35 crc kubenswrapper[4942]: I0218 19:29:35.853575 4942 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa407b7c-08d9-4762-9aea-25d6aa8e4338-util\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:35 crc kubenswrapper[4942]: I0218 19:29:35.853612 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2q9v\" (UniqueName: \"kubernetes.io/projected/aa407b7c-08d9-4762-9aea-25d6aa8e4338-kube-api-access-d2q9v\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:35 crc kubenswrapper[4942]: I0218 19:29:35.853629 4942 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa407b7c-08d9-4762-9aea-25d6aa8e4338-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:29:36 crc kubenswrapper[4942]: I0218 19:29:36.339035 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" event={"ID":"aa407b7c-08d9-4762-9aea-25d6aa8e4338","Type":"ContainerDied","Data":"9cf5918a0319959be412172b0bd964ed900873c62b9ea55107030c04fc05324f"} Feb 18 19:29:36 crc kubenswrapper[4942]: I0218 19:29:36.339078 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cf5918a0319959be412172b0bd964ed900873c62b9ea55107030c04fc05324f" Feb 18 19:29:36 crc kubenswrapper[4942]: I0218 19:29:36.339141 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213sqkzc" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.446419 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb"] Feb 18 19:29:49 crc kubenswrapper[4942]: E0218 19:29:49.447025 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa407b7c-08d9-4762-9aea-25d6aa8e4338" containerName="pull" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.447038 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa407b7c-08d9-4762-9aea-25d6aa8e4338" containerName="pull" Feb 18 19:29:49 crc kubenswrapper[4942]: E0218 19:29:49.447047 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa407b7c-08d9-4762-9aea-25d6aa8e4338" containerName="util" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.447053 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa407b7c-08d9-4762-9aea-25d6aa8e4338" containerName="util" Feb 18 19:29:49 crc kubenswrapper[4942]: E0218 19:29:49.447067 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa407b7c-08d9-4762-9aea-25d6aa8e4338" containerName="extract" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.447073 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa407b7c-08d9-4762-9aea-25d6aa8e4338" containerName="extract" Feb 18 19:29:49 crc kubenswrapper[4942]: E0218 19:29:49.447082 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5683bb73-dc7f-40ed-86cd-0c08f2d38147" containerName="console" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.447088 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="5683bb73-dc7f-40ed-86cd-0c08f2d38147" containerName="console" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.447182 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa407b7c-08d9-4762-9aea-25d6aa8e4338" 
containerName="extract" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.447194 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="5683bb73-dc7f-40ed-86cd-0c08f2d38147" containerName="console" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.447576 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.449831 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.449866 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.450438 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.450622 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-xbghr" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.450868 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.459080 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb"] Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.536739 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b5ebea7c-4a93-46a4-866a-8e00981e0245-apiservice-cert\") pod \"metallb-operator-controller-manager-654b77968c-5hpbb\" (UID: \"b5ebea7c-4a93-46a4-866a-8e00981e0245\") " pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" Feb 
18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.536883 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r58m4\" (UniqueName: \"kubernetes.io/projected/b5ebea7c-4a93-46a4-866a-8e00981e0245-kube-api-access-r58m4\") pod \"metallb-operator-controller-manager-654b77968c-5hpbb\" (UID: \"b5ebea7c-4a93-46a4-866a-8e00981e0245\") " pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.536907 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b5ebea7c-4a93-46a4-866a-8e00981e0245-webhook-cert\") pod \"metallb-operator-controller-manager-654b77968c-5hpbb\" (UID: \"b5ebea7c-4a93-46a4-866a-8e00981e0245\") " pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.638417 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b5ebea7c-4a93-46a4-866a-8e00981e0245-apiservice-cert\") pod \"metallb-operator-controller-manager-654b77968c-5hpbb\" (UID: \"b5ebea7c-4a93-46a4-866a-8e00981e0245\") " pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.638473 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r58m4\" (UniqueName: \"kubernetes.io/projected/b5ebea7c-4a93-46a4-866a-8e00981e0245-kube-api-access-r58m4\") pod \"metallb-operator-controller-manager-654b77968c-5hpbb\" (UID: \"b5ebea7c-4a93-46a4-866a-8e00981e0245\") " pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.638511 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/b5ebea7c-4a93-46a4-866a-8e00981e0245-webhook-cert\") pod \"metallb-operator-controller-manager-654b77968c-5hpbb\" (UID: \"b5ebea7c-4a93-46a4-866a-8e00981e0245\") " pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.645566 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b5ebea7c-4a93-46a4-866a-8e00981e0245-webhook-cert\") pod \"metallb-operator-controller-manager-654b77968c-5hpbb\" (UID: \"b5ebea7c-4a93-46a4-866a-8e00981e0245\") " pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.650027 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b5ebea7c-4a93-46a4-866a-8e00981e0245-apiservice-cert\") pod \"metallb-operator-controller-manager-654b77968c-5hpbb\" (UID: \"b5ebea7c-4a93-46a4-866a-8e00981e0245\") " pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.663543 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r58m4\" (UniqueName: \"kubernetes.io/projected/b5ebea7c-4a93-46a4-866a-8e00981e0245-kube-api-access-r58m4\") pod \"metallb-operator-controller-manager-654b77968c-5hpbb\" (UID: \"b5ebea7c-4a93-46a4-866a-8e00981e0245\") " pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.673974 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf"] Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.674907 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.676737 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.676837 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.677095 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-k4db2" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.691041 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf"] Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.739481 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52hv9\" (UniqueName: \"kubernetes.io/projected/5c90854a-ee13-4493-b4d1-7c891f1eb904-kube-api-access-52hv9\") pod \"metallb-operator-webhook-server-6586457bb5-2xsvf\" (UID: \"5c90854a-ee13-4493-b4d1-7c891f1eb904\") " pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.739549 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c90854a-ee13-4493-b4d1-7c891f1eb904-apiservice-cert\") pod \"metallb-operator-webhook-server-6586457bb5-2xsvf\" (UID: \"5c90854a-ee13-4493-b4d1-7c891f1eb904\") " pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.739864 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/5c90854a-ee13-4493-b4d1-7c891f1eb904-webhook-cert\") pod \"metallb-operator-webhook-server-6586457bb5-2xsvf\" (UID: \"5c90854a-ee13-4493-b4d1-7c891f1eb904\") " pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.765328 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.840905 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c90854a-ee13-4493-b4d1-7c891f1eb904-apiservice-cert\") pod \"metallb-operator-webhook-server-6586457bb5-2xsvf\" (UID: \"5c90854a-ee13-4493-b4d1-7c891f1eb904\") " pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.841022 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c90854a-ee13-4493-b4d1-7c891f1eb904-webhook-cert\") pod \"metallb-operator-webhook-server-6586457bb5-2xsvf\" (UID: \"5c90854a-ee13-4493-b4d1-7c891f1eb904\") " pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.841062 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52hv9\" (UniqueName: \"kubernetes.io/projected/5c90854a-ee13-4493-b4d1-7c891f1eb904-kube-api-access-52hv9\") pod \"metallb-operator-webhook-server-6586457bb5-2xsvf\" (UID: \"5c90854a-ee13-4493-b4d1-7c891f1eb904\") " pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.854520 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/5c90854a-ee13-4493-b4d1-7c891f1eb904-webhook-cert\") pod \"metallb-operator-webhook-server-6586457bb5-2xsvf\" (UID: \"5c90854a-ee13-4493-b4d1-7c891f1eb904\") " pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.855832 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c90854a-ee13-4493-b4d1-7c891f1eb904-apiservice-cert\") pod \"metallb-operator-webhook-server-6586457bb5-2xsvf\" (UID: \"5c90854a-ee13-4493-b4d1-7c891f1eb904\") " pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" Feb 18 19:29:49 crc kubenswrapper[4942]: I0218 19:29:49.866195 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52hv9\" (UniqueName: \"kubernetes.io/projected/5c90854a-ee13-4493-b4d1-7c891f1eb904-kube-api-access-52hv9\") pod \"metallb-operator-webhook-server-6586457bb5-2xsvf\" (UID: \"5c90854a-ee13-4493-b4d1-7c891f1eb904\") " pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" Feb 18 19:29:50 crc kubenswrapper[4942]: I0218 19:29:50.016298 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" Feb 18 19:29:50 crc kubenswrapper[4942]: I0218 19:29:50.064651 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb"] Feb 18 19:29:50 crc kubenswrapper[4942]: W0218 19:29:50.067104 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5ebea7c_4a93_46a4_866a_8e00981e0245.slice/crio-7a98a844c8a865a8ab547447405be51f137b9cfdf20d522c644d4b3db569b13d WatchSource:0}: Error finding container 7a98a844c8a865a8ab547447405be51f137b9cfdf20d522c644d4b3db569b13d: Status 404 returned error can't find the container with id 7a98a844c8a865a8ab547447405be51f137b9cfdf20d522c644d4b3db569b13d Feb 18 19:29:50 crc kubenswrapper[4942]: I0218 19:29:50.417933 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" event={"ID":"b5ebea7c-4a93-46a4-866a-8e00981e0245","Type":"ContainerStarted","Data":"7a98a844c8a865a8ab547447405be51f137b9cfdf20d522c644d4b3db569b13d"} Feb 18 19:29:50 crc kubenswrapper[4942]: I0218 19:29:50.477907 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf"] Feb 18 19:29:50 crc kubenswrapper[4942]: W0218 19:29:50.479153 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c90854a_ee13_4493_b4d1_7c891f1eb904.slice/crio-1609e9707c48d71f6a0118b27e06026b73cec6d29793a8bbf7aebb0e4b8ccc59 WatchSource:0}: Error finding container 1609e9707c48d71f6a0118b27e06026b73cec6d29793a8bbf7aebb0e4b8ccc59: Status 404 returned error can't find the container with id 1609e9707c48d71f6a0118b27e06026b73cec6d29793a8bbf7aebb0e4b8ccc59 Feb 18 19:29:51 crc kubenswrapper[4942]: I0218 19:29:51.423281 4942 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" event={"ID":"5c90854a-ee13-4493-b4d1-7c891f1eb904","Type":"ContainerStarted","Data":"1609e9707c48d71f6a0118b27e06026b73cec6d29793a8bbf7aebb0e4b8ccc59"} Feb 18 19:29:55 crc kubenswrapper[4942]: I0218 19:29:55.409322 4942 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 19:29:56 crc kubenswrapper[4942]: I0218 19:29:56.453193 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" event={"ID":"b5ebea7c-4a93-46a4-866a-8e00981e0245","Type":"ContainerStarted","Data":"6694ca2ca9f79d5f85fea2aba7e2c1aa2a7eb8584d8f17755ab8a68ba13d5b51"} Feb 18 19:29:56 crc kubenswrapper[4942]: I0218 19:29:56.453539 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" Feb 18 19:29:56 crc kubenswrapper[4942]: I0218 19:29:56.455285 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" event={"ID":"5c90854a-ee13-4493-b4d1-7c891f1eb904","Type":"ContainerStarted","Data":"500f83d0c676d2641cb6e778a46b8fdad6058b189d2f661876018118303d06ed"} Feb 18 19:29:56 crc kubenswrapper[4942]: I0218 19:29:56.455467 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" Feb 18 19:29:56 crc kubenswrapper[4942]: I0218 19:29:56.495701 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" podStartSLOduration=2.24724387 podStartE2EDuration="7.495679851s" podCreationTimestamp="2026-02-18 19:29:49 +0000 UTC" firstStartedPulling="2026-02-18 19:29:50.069309076 +0000 UTC m=+749.774241751" lastFinishedPulling="2026-02-18 19:29:55.317745027 +0000 UTC 
m=+755.022677732" observedRunningTime="2026-02-18 19:29:56.49017152 +0000 UTC m=+756.195104185" watchObservedRunningTime="2026-02-18 19:29:56.495679851 +0000 UTC m=+756.200612526" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.196216 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" podStartSLOduration=5.615165601 podStartE2EDuration="11.196196494s" podCreationTimestamp="2026-02-18 19:29:49 +0000 UTC" firstStartedPulling="2026-02-18 19:29:50.482323469 +0000 UTC m=+750.187256144" lastFinishedPulling="2026-02-18 19:29:56.063354372 +0000 UTC m=+755.768287037" observedRunningTime="2026-02-18 19:29:56.526665719 +0000 UTC m=+756.231598384" watchObservedRunningTime="2026-02-18 19:30:00.196196494 +0000 UTC m=+759.901129169" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.197243 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh"] Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.198146 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.200001 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.201189 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.208404 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh"] Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.297828 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50e6e4f2-9597-4f04-aa2d-d60b56446486-secret-volume\") pod \"collect-profiles-29524050-zccjh\" (UID: \"50e6e4f2-9597-4f04-aa2d-d60b56446486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.297899 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50e6e4f2-9597-4f04-aa2d-d60b56446486-config-volume\") pod \"collect-profiles-29524050-zccjh\" (UID: \"50e6e4f2-9597-4f04-aa2d-d60b56446486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.298151 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpns5\" (UniqueName: \"kubernetes.io/projected/50e6e4f2-9597-4f04-aa2d-d60b56446486-kube-api-access-rpns5\") pod \"collect-profiles-29524050-zccjh\" (UID: \"50e6e4f2-9597-4f04-aa2d-d60b56446486\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.399600 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50e6e4f2-9597-4f04-aa2d-d60b56446486-secret-volume\") pod \"collect-profiles-29524050-zccjh\" (UID: \"50e6e4f2-9597-4f04-aa2d-d60b56446486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.399696 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50e6e4f2-9597-4f04-aa2d-d60b56446486-config-volume\") pod \"collect-profiles-29524050-zccjh\" (UID: \"50e6e4f2-9597-4f04-aa2d-d60b56446486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.399784 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpns5\" (UniqueName: \"kubernetes.io/projected/50e6e4f2-9597-4f04-aa2d-d60b56446486-kube-api-access-rpns5\") pod \"collect-profiles-29524050-zccjh\" (UID: \"50e6e4f2-9597-4f04-aa2d-d60b56446486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.400746 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50e6e4f2-9597-4f04-aa2d-d60b56446486-config-volume\") pod \"collect-profiles-29524050-zccjh\" (UID: \"50e6e4f2-9597-4f04-aa2d-d60b56446486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.411394 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/50e6e4f2-9597-4f04-aa2d-d60b56446486-secret-volume\") pod \"collect-profiles-29524050-zccjh\" (UID: \"50e6e4f2-9597-4f04-aa2d-d60b56446486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.574504 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpns5\" (UniqueName: \"kubernetes.io/projected/50e6e4f2-9597-4f04-aa2d-d60b56446486-kube-api-access-rpns5\") pod \"collect-profiles-29524050-zccjh\" (UID: \"50e6e4f2-9597-4f04-aa2d-d60b56446486\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" Feb 18 19:30:00 crc kubenswrapper[4942]: I0218 19:30:00.816230 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" Feb 18 19:30:01 crc kubenswrapper[4942]: I0218 19:30:01.004107 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh"] Feb 18 19:30:01 crc kubenswrapper[4942]: I0218 19:30:01.491354 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" event={"ID":"50e6e4f2-9597-4f04-aa2d-d60b56446486","Type":"ContainerStarted","Data":"a8b1969ada1b3f8254fddc0c25babc6706a63d49bbd527b2c0f97f6fdf13b622"} Feb 18 19:30:02 crc kubenswrapper[4942]: I0218 19:30:02.501628 4942 generic.go:334] "Generic (PLEG): container finished" podID="50e6e4f2-9597-4f04-aa2d-d60b56446486" containerID="45f611558efef294793c691f22c0d11c4ce92907ad4ca205006156562d59216c" exitCode=0 Feb 18 19:30:02 crc kubenswrapper[4942]: I0218 19:30:02.501738 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" 
event={"ID":"50e6e4f2-9597-4f04-aa2d-d60b56446486","Type":"ContainerDied","Data":"45f611558efef294793c691f22c0d11c4ce92907ad4ca205006156562d59216c"} Feb 18 19:30:03 crc kubenswrapper[4942]: I0218 19:30:03.813081 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" Feb 18 19:30:03 crc kubenswrapper[4942]: I0218 19:30:03.944680 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpns5\" (UniqueName: \"kubernetes.io/projected/50e6e4f2-9597-4f04-aa2d-d60b56446486-kube-api-access-rpns5\") pod \"50e6e4f2-9597-4f04-aa2d-d60b56446486\" (UID: \"50e6e4f2-9597-4f04-aa2d-d60b56446486\") " Feb 18 19:30:03 crc kubenswrapper[4942]: I0218 19:30:03.944834 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50e6e4f2-9597-4f04-aa2d-d60b56446486-config-volume\") pod \"50e6e4f2-9597-4f04-aa2d-d60b56446486\" (UID: \"50e6e4f2-9597-4f04-aa2d-d60b56446486\") " Feb 18 19:30:03 crc kubenswrapper[4942]: I0218 19:30:03.944881 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50e6e4f2-9597-4f04-aa2d-d60b56446486-secret-volume\") pod \"50e6e4f2-9597-4f04-aa2d-d60b56446486\" (UID: \"50e6e4f2-9597-4f04-aa2d-d60b56446486\") " Feb 18 19:30:03 crc kubenswrapper[4942]: I0218 19:30:03.946243 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50e6e4f2-9597-4f04-aa2d-d60b56446486-config-volume" (OuterVolumeSpecName: "config-volume") pod "50e6e4f2-9597-4f04-aa2d-d60b56446486" (UID: "50e6e4f2-9597-4f04-aa2d-d60b56446486"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:30:03 crc kubenswrapper[4942]: I0218 19:30:03.950440 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50e6e4f2-9597-4f04-aa2d-d60b56446486-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "50e6e4f2-9597-4f04-aa2d-d60b56446486" (UID: "50e6e4f2-9597-4f04-aa2d-d60b56446486"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:30:03 crc kubenswrapper[4942]: I0218 19:30:03.959954 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50e6e4f2-9597-4f04-aa2d-d60b56446486-kube-api-access-rpns5" (OuterVolumeSpecName: "kube-api-access-rpns5") pod "50e6e4f2-9597-4f04-aa2d-d60b56446486" (UID: "50e6e4f2-9597-4f04-aa2d-d60b56446486"). InnerVolumeSpecName "kube-api-access-rpns5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:30:04 crc kubenswrapper[4942]: I0218 19:30:04.046623 4942 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50e6e4f2-9597-4f04-aa2d-d60b56446486-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:30:04 crc kubenswrapper[4942]: I0218 19:30:04.046656 4942 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/50e6e4f2-9597-4f04-aa2d-d60b56446486-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:30:04 crc kubenswrapper[4942]: I0218 19:30:04.046668 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpns5\" (UniqueName: \"kubernetes.io/projected/50e6e4f2-9597-4f04-aa2d-d60b56446486-kube-api-access-rpns5\") on node \"crc\" DevicePath \"\"" Feb 18 19:30:04 crc kubenswrapper[4942]: I0218 19:30:04.516419 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" 
event={"ID":"50e6e4f2-9597-4f04-aa2d-d60b56446486","Type":"ContainerDied","Data":"a8b1969ada1b3f8254fddc0c25babc6706a63d49bbd527b2c0f97f6fdf13b622"} Feb 18 19:30:04 crc kubenswrapper[4942]: I0218 19:30:04.516456 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8b1969ada1b3f8254fddc0c25babc6706a63d49bbd527b2c0f97f6fdf13b622" Feb 18 19:30:04 crc kubenswrapper[4942]: I0218 19:30:04.516478 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh" Feb 18 19:30:10 crc kubenswrapper[4942]: I0218 19:30:10.021038 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6586457bb5-2xsvf" Feb 18 19:30:23 crc kubenswrapper[4942]: I0218 19:30:23.741320 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:30:23 crc kubenswrapper[4942]: I0218 19:30:23.742132 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:30:29 crc kubenswrapper[4942]: I0218 19:30:29.768568 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-654b77968c-5hpbb" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.629670 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb"] Feb 18 19:30:30 crc kubenswrapper[4942]: E0218 19:30:30.630326 4942 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50e6e4f2-9597-4f04-aa2d-d60b56446486" containerName="collect-profiles" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.630360 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="50e6e4f2-9597-4f04-aa2d-d60b56446486" containerName="collect-profiles" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.630490 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="50e6e4f2-9597-4f04-aa2d-d60b56446486" containerName="collect-profiles" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.631040 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.637221 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-5vknp" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.637240 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.648410 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb"] Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.664515 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-4jkrm"] Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.666685 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.668652 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.668940 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.753509 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-pm8vg"] Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.758854 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-gzp79"] Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.759709 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-gzp79" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.760259 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-pm8vg" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.761922 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-xpfsc" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.762199 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.762522 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.762560 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.762936 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.776430 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-gzp79"] Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.805726 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/eddf6439-0868-428b-9bc0-5b85371d6103-frr-conf\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.805791 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2963214d-df0b-4249-832e-8396a15ed441-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-7ghrb\" (UID: \"2963214d-df0b-4249-832e-8396a15ed441\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.805815 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/eddf6439-0868-428b-9bc0-5b85371d6103-reloader\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.806071 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eddf6439-0868-428b-9bc0-5b85371d6103-metrics-certs\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.806179 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/eddf6439-0868-428b-9bc0-5b85371d6103-metrics\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.806225 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/eddf6439-0868-428b-9bc0-5b85371d6103-frr-sockets\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.806310 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgwrl\" (UniqueName: \"kubernetes.io/projected/2963214d-df0b-4249-832e-8396a15ed441-kube-api-access-wgwrl\") pod \"frr-k8s-webhook-server-78b44bf5bb-7ghrb\" (UID: \"2963214d-df0b-4249-832e-8396a15ed441\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.806348 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-vx4g5\" (UniqueName: \"kubernetes.io/projected/eddf6439-0868-428b-9bc0-5b85371d6103-kube-api-access-vx4g5\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.806372 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/eddf6439-0868-428b-9bc0-5b85371d6103-frr-startup\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908008 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-metrics-certs\") pod \"speaker-pm8vg\" (UID: \"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908054 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2qk2\" (UniqueName: \"kubernetes.io/projected/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-kube-api-access-f2qk2\") pod \"speaker-pm8vg\" (UID: \"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908073 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56c6bc24-68cd-4bee-8746-d3cfd2bf97c7-metrics-certs\") pod \"controller-69bbfbf88f-gzp79\" (UID: \"56c6bc24-68cd-4bee-8746-d3cfd2bf97c7\") " pod="metallb-system/controller-69bbfbf88f-gzp79" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908090 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/56c6bc24-68cd-4bee-8746-d3cfd2bf97c7-cert\") pod \"controller-69bbfbf88f-gzp79\" (UID: \"56c6bc24-68cd-4bee-8746-d3cfd2bf97c7\") " pod="metallb-system/controller-69bbfbf88f-gzp79" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908114 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx4g5\" (UniqueName: \"kubernetes.io/projected/eddf6439-0868-428b-9bc0-5b85371d6103-kube-api-access-vx4g5\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908133 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-memberlist\") pod \"speaker-pm8vg\" (UID: \"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908200 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/eddf6439-0868-428b-9bc0-5b85371d6103-frr-startup\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908234 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/eddf6439-0868-428b-9bc0-5b85371d6103-frr-conf\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908253 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-metallb-excludel2\") pod \"speaker-pm8vg\" (UID: 
\"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908277 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2963214d-df0b-4249-832e-8396a15ed441-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-7ghrb\" (UID: \"2963214d-df0b-4249-832e-8396a15ed441\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908293 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/eddf6439-0868-428b-9bc0-5b85371d6103-reloader\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908315 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eddf6439-0868-428b-9bc0-5b85371d6103-metrics-certs\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908333 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/eddf6439-0868-428b-9bc0-5b85371d6103-metrics\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908349 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/eddf6439-0868-428b-9bc0-5b85371d6103-frr-sockets\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908384 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4fhv\" (UniqueName: \"kubernetes.io/projected/56c6bc24-68cd-4bee-8746-d3cfd2bf97c7-kube-api-access-r4fhv\") pod \"controller-69bbfbf88f-gzp79\" (UID: \"56c6bc24-68cd-4bee-8746-d3cfd2bf97c7\") " pod="metallb-system/controller-69bbfbf88f-gzp79" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908407 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgwrl\" (UniqueName: \"kubernetes.io/projected/2963214d-df0b-4249-832e-8396a15ed441-kube-api-access-wgwrl\") pod \"frr-k8s-webhook-server-78b44bf5bb-7ghrb\" (UID: \"2963214d-df0b-4249-832e-8396a15ed441\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.908775 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/eddf6439-0868-428b-9bc0-5b85371d6103-frr-conf\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.909332 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/eddf6439-0868-428b-9bc0-5b85371d6103-frr-startup\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.909575 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/eddf6439-0868-428b-9bc0-5b85371d6103-frr-sockets\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.910229 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: 
\"kubernetes.io/empty-dir/eddf6439-0868-428b-9bc0-5b85371d6103-reloader\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.910371 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/eddf6439-0868-428b-9bc0-5b85371d6103-metrics\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.914309 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eddf6439-0868-428b-9bc0-5b85371d6103-metrics-certs\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.916815 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2963214d-df0b-4249-832e-8396a15ed441-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-7ghrb\" (UID: \"2963214d-df0b-4249-832e-8396a15ed441\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.922888 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx4g5\" (UniqueName: \"kubernetes.io/projected/eddf6439-0868-428b-9bc0-5b85371d6103-kube-api-access-vx4g5\") pod \"frr-k8s-4jkrm\" (UID: \"eddf6439-0868-428b-9bc0-5b85371d6103\") " pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:30 crc kubenswrapper[4942]: I0218 19:30:30.928302 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgwrl\" (UniqueName: \"kubernetes.io/projected/2963214d-df0b-4249-832e-8396a15ed441-kube-api-access-wgwrl\") pod \"frr-k8s-webhook-server-78b44bf5bb-7ghrb\" (UID: \"2963214d-df0b-4249-832e-8396a15ed441\") " 
pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.003116 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.009563 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-metrics-certs\") pod \"speaker-pm8vg\" (UID: \"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.009631 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2qk2\" (UniqueName: \"kubernetes.io/projected/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-kube-api-access-f2qk2\") pod \"speaker-pm8vg\" (UID: \"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.009682 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56c6bc24-68cd-4bee-8746-d3cfd2bf97c7-metrics-certs\") pod \"controller-69bbfbf88f-gzp79\" (UID: \"56c6bc24-68cd-4bee-8746-d3cfd2bf97c7\") " pod="metallb-system/controller-69bbfbf88f-gzp79" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.009714 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56c6bc24-68cd-4bee-8746-d3cfd2bf97c7-cert\") pod \"controller-69bbfbf88f-gzp79\" (UID: \"56c6bc24-68cd-4bee-8746-d3cfd2bf97c7\") " pod="metallb-system/controller-69bbfbf88f-gzp79" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.009783 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-memberlist\") 
pod \"speaker-pm8vg\" (UID: \"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.009834 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-metallb-excludel2\") pod \"speaker-pm8vg\" (UID: \"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.009962 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4fhv\" (UniqueName: \"kubernetes.io/projected/56c6bc24-68cd-4bee-8746-d3cfd2bf97c7-kube-api-access-r4fhv\") pod \"controller-69bbfbf88f-gzp79\" (UID: \"56c6bc24-68cd-4bee-8746-d3cfd2bf97c7\") " pod="metallb-system/controller-69bbfbf88f-gzp79" Feb 18 19:30:31 crc kubenswrapper[4942]: E0218 19:30:31.010006 4942 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 18 19:30:31 crc kubenswrapper[4942]: E0218 19:30:31.010103 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-memberlist podName:6176b1cc-ddc9-4bdd-9707-ae3c04996b6c nodeName:}" failed. No retries permitted until 2026-02-18 19:30:31.510078867 +0000 UTC m=+791.215011542 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-memberlist") pod "speaker-pm8vg" (UID: "6176b1cc-ddc9-4bdd-9707-ae3c04996b6c") : secret "metallb-memberlist" not found Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.010483 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.010885 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-metallb-excludel2\") pod \"speaker-pm8vg\" (UID: \"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.012391 4942 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.012564 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-metrics-certs\") pod \"speaker-pm8vg\" (UID: \"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.022373 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56c6bc24-68cd-4bee-8746-d3cfd2bf97c7-metrics-certs\") pod \"controller-69bbfbf88f-gzp79\" (UID: \"56c6bc24-68cd-4bee-8746-d3cfd2bf97c7\") " pod="metallb-system/controller-69bbfbf88f-gzp79" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.026137 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/56c6bc24-68cd-4bee-8746-d3cfd2bf97c7-cert\") pod \"controller-69bbfbf88f-gzp79\" (UID: \"56c6bc24-68cd-4bee-8746-d3cfd2bf97c7\") " pod="metallb-system/controller-69bbfbf88f-gzp79" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.026256 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2qk2\" (UniqueName: \"kubernetes.io/projected/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-kube-api-access-f2qk2\") pod \"speaker-pm8vg\" (UID: 
\"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.032599 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4fhv\" (UniqueName: \"kubernetes.io/projected/56c6bc24-68cd-4bee-8746-d3cfd2bf97c7-kube-api-access-r4fhv\") pod \"controller-69bbfbf88f-gzp79\" (UID: \"56c6bc24-68cd-4bee-8746-d3cfd2bf97c7\") " pod="metallb-system/controller-69bbfbf88f-gzp79" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.076894 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-gzp79" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.353154 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-gzp79"] Feb 18 19:30:31 crc kubenswrapper[4942]: W0218 19:30:31.356965 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56c6bc24_68cd_4bee_8746_d3cfd2bf97c7.slice/crio-2d44ce364a0306cba64752758aa782bac3237f81a5bd9b164fa2f61490b4c2cb WatchSource:0}: Error finding container 2d44ce364a0306cba64752758aa782bac3237f81a5bd9b164fa2f61490b4c2cb: Status 404 returned error can't find the container with id 2d44ce364a0306cba64752758aa782bac3237f81a5bd9b164fa2f61490b4c2cb Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.463074 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb"] Feb 18 19:30:31 crc kubenswrapper[4942]: W0218 19:30:31.468283 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2963214d_df0b_4249_832e_8396a15ed441.slice/crio-2997c4e70edc02d174d3015fa9ff906e14f93969e6d0d8e48d9ba24ac32f9433 WatchSource:0}: Error finding container 2997c4e70edc02d174d3015fa9ff906e14f93969e6d0d8e48d9ba24ac32f9433: Status 404 returned error 
can't find the container with id 2997c4e70edc02d174d3015fa9ff906e14f93969e6d0d8e48d9ba24ac32f9433 Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.517613 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-memberlist\") pod \"speaker-pm8vg\" (UID: \"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:31 crc kubenswrapper[4942]: E0218 19:30:31.517741 4942 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 18 19:30:31 crc kubenswrapper[4942]: E0218 19:30:31.517831 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-memberlist podName:6176b1cc-ddc9-4bdd-9707-ae3c04996b6c nodeName:}" failed. No retries permitted until 2026-02-18 19:30:32.517814144 +0000 UTC m=+792.222746809 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-memberlist") pod "speaker-pm8vg" (UID: "6176b1cc-ddc9-4bdd-9707-ae3c04996b6c") : secret "metallb-memberlist" not found Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.748652 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb" event={"ID":"2963214d-df0b-4249-832e-8396a15ed441","Type":"ContainerStarted","Data":"2997c4e70edc02d174d3015fa9ff906e14f93969e6d0d8e48d9ba24ac32f9433"} Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.750915 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-gzp79" event={"ID":"56c6bc24-68cd-4bee-8746-d3cfd2bf97c7","Type":"ContainerStarted","Data":"a05e65257db07009159f617a08217d9ea2abf8742a300a1d8be2d852d6a4d7c9"} Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.750957 4942 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="metallb-system/controller-69bbfbf88f-gzp79" event={"ID":"56c6bc24-68cd-4bee-8746-d3cfd2bf97c7","Type":"ContainerStarted","Data":"4ea91c287cbdb76922ecc10664e5b7478349c71e33e651fc26b3d04eb6ca2104"} Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.751149 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-gzp79" event={"ID":"56c6bc24-68cd-4bee-8746-d3cfd2bf97c7","Type":"ContainerStarted","Data":"2d44ce364a0306cba64752758aa782bac3237f81a5bd9b164fa2f61490b4c2cb"} Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.751170 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-gzp79" Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.751958 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4jkrm" event={"ID":"eddf6439-0868-428b-9bc0-5b85371d6103","Type":"ContainerStarted","Data":"490af75b4c49bef278b67009aeba1191dc6afda18526a7f8efcbec1110ae7761"} Feb 18 19:30:31 crc kubenswrapper[4942]: I0218 19:30:31.770573 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-gzp79" podStartSLOduration=1.77055242 podStartE2EDuration="1.77055242s" podCreationTimestamp="2026-02-18 19:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:30:31.765725349 +0000 UTC m=+791.470658074" watchObservedRunningTime="2026-02-18 19:30:31.77055242 +0000 UTC m=+791.475485085" Feb 18 19:30:32 crc kubenswrapper[4942]: I0218 19:30:32.539037 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-memberlist\") pod \"speaker-pm8vg\" (UID: \"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:32 crc kubenswrapper[4942]: I0218 
19:30:32.547190 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6176b1cc-ddc9-4bdd-9707-ae3c04996b6c-memberlist\") pod \"speaker-pm8vg\" (UID: \"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c\") " pod="metallb-system/speaker-pm8vg" Feb 18 19:30:32 crc kubenswrapper[4942]: I0218 19:30:32.596073 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-pm8vg" Feb 18 19:30:32 crc kubenswrapper[4942]: W0218 19:30:32.635945 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6176b1cc_ddc9_4bdd_9707_ae3c04996b6c.slice/crio-d33300bb504e8e3940697de6eb3f39b22ae6ce2e51f920a3015a463871cf39d2 WatchSource:0}: Error finding container d33300bb504e8e3940697de6eb3f39b22ae6ce2e51f920a3015a463871cf39d2: Status 404 returned error can't find the container with id d33300bb504e8e3940697de6eb3f39b22ae6ce2e51f920a3015a463871cf39d2 Feb 18 19:30:32 crc kubenswrapper[4942]: I0218 19:30:32.780140 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pm8vg" event={"ID":"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c","Type":"ContainerStarted","Data":"d33300bb504e8e3940697de6eb3f39b22ae6ce2e51f920a3015a463871cf39d2"} Feb 18 19:30:33 crc kubenswrapper[4942]: I0218 19:30:33.789512 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pm8vg" event={"ID":"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c","Type":"ContainerStarted","Data":"e33de20324e503945414965a83aef91476f07dbca665144cf9241c297ac44447"} Feb 18 19:30:33 crc kubenswrapper[4942]: I0218 19:30:33.789552 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pm8vg" event={"ID":"6176b1cc-ddc9-4bdd-9707-ae3c04996b6c","Type":"ContainerStarted","Data":"5c7b4fffa33ee6a2c976ae51e80386ff0dcb569d33fafb7a2e26d771e81d5cd7"} Feb 18 19:30:33 crc kubenswrapper[4942]: I0218 19:30:33.789665 4942 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-pm8vg" Feb 18 19:30:33 crc kubenswrapper[4942]: I0218 19:30:33.817691 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-pm8vg" podStartSLOduration=3.8176779339999998 podStartE2EDuration="3.817677934s" podCreationTimestamp="2026-02-18 19:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:30:33.814217318 +0000 UTC m=+793.519149983" watchObservedRunningTime="2026-02-18 19:30:33.817677934 +0000 UTC m=+793.522610599" Feb 18 19:30:38 crc kubenswrapper[4942]: I0218 19:30:38.833152 4942 generic.go:334] "Generic (PLEG): container finished" podID="eddf6439-0868-428b-9bc0-5b85371d6103" containerID="a3f318eb388ac356022134dc246f5da98fd0b3d94b33bb3683437dc1c2a303b5" exitCode=0 Feb 18 19:30:38 crc kubenswrapper[4942]: I0218 19:30:38.833243 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4jkrm" event={"ID":"eddf6439-0868-428b-9bc0-5b85371d6103","Type":"ContainerDied","Data":"a3f318eb388ac356022134dc246f5da98fd0b3d94b33bb3683437dc1c2a303b5"} Feb 18 19:30:38 crc kubenswrapper[4942]: I0218 19:30:38.836334 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb" event={"ID":"2963214d-df0b-4249-832e-8396a15ed441","Type":"ContainerStarted","Data":"371cf545a7ea38f9136c8f015c3b70951e9bd3e49097f90e64801bc4067d1f18"} Feb 18 19:30:38 crc kubenswrapper[4942]: I0218 19:30:38.836744 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb" Feb 18 19:30:38 crc kubenswrapper[4942]: I0218 19:30:38.908064 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb" podStartSLOduration=2.00864274 
podStartE2EDuration="8.908039572s" podCreationTimestamp="2026-02-18 19:30:30 +0000 UTC" firstStartedPulling="2026-02-18 19:30:31.470300077 +0000 UTC m=+791.175232742" lastFinishedPulling="2026-02-18 19:30:38.369696869 +0000 UTC m=+798.074629574" observedRunningTime="2026-02-18 19:30:38.904840552 +0000 UTC m=+798.609773267" watchObservedRunningTime="2026-02-18 19:30:38.908039572 +0000 UTC m=+798.612972257" Feb 18 19:30:39 crc kubenswrapper[4942]: I0218 19:30:39.845527 4942 generic.go:334] "Generic (PLEG): container finished" podID="eddf6439-0868-428b-9bc0-5b85371d6103" containerID="302cb526dd3643bbbd7b7f2cc1e8ac09f60a5a155079b8b66fedee1897dd4fa2" exitCode=0 Feb 18 19:30:39 crc kubenswrapper[4942]: I0218 19:30:39.846077 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4jkrm" event={"ID":"eddf6439-0868-428b-9bc0-5b85371d6103","Type":"ContainerDied","Data":"302cb526dd3643bbbd7b7f2cc1e8ac09f60a5a155079b8b66fedee1897dd4fa2"} Feb 18 19:30:40 crc kubenswrapper[4942]: I0218 19:30:40.858663 4942 generic.go:334] "Generic (PLEG): container finished" podID="eddf6439-0868-428b-9bc0-5b85371d6103" containerID="42a262e61b422bd818b4f6a5e771aa968b48b6d351161c933d1149199aa5c10c" exitCode=0 Feb 18 19:30:40 crc kubenswrapper[4942]: I0218 19:30:40.858722 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4jkrm" event={"ID":"eddf6439-0868-428b-9bc0-5b85371d6103","Type":"ContainerDied","Data":"42a262e61b422bd818b4f6a5e771aa968b48b6d351161c933d1149199aa5c10c"} Feb 18 19:30:41 crc kubenswrapper[4942]: I0218 19:30:41.088528 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-gzp79" Feb 18 19:30:41 crc kubenswrapper[4942]: I0218 19:30:41.871216 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4jkrm" 
event={"ID":"eddf6439-0868-428b-9bc0-5b85371d6103","Type":"ContainerStarted","Data":"86097023d27d0341ea77cd48cdbf1e5b391fc69f61b7fe2ebe11564218a632c5"} Feb 18 19:30:41 crc kubenswrapper[4942]: I0218 19:30:41.871506 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4jkrm" event={"ID":"eddf6439-0868-428b-9bc0-5b85371d6103","Type":"ContainerStarted","Data":"03f33caeacbbaca1b7a95e1fadee0d53533f89baea8a15ca3df499d50ba94747"} Feb 18 19:30:41 crc kubenswrapper[4942]: I0218 19:30:41.871526 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:41 crc kubenswrapper[4942]: I0218 19:30:41.871538 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4jkrm" event={"ID":"eddf6439-0868-428b-9bc0-5b85371d6103","Type":"ContainerStarted","Data":"b67762593bfa38ab20dbba98a205fb02a1af2a52c922d04397f46e53277ce4fa"} Feb 18 19:30:41 crc kubenswrapper[4942]: I0218 19:30:41.871549 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4jkrm" event={"ID":"eddf6439-0868-428b-9bc0-5b85371d6103","Type":"ContainerStarted","Data":"31738ac3e00a8360d37f5ea2d06de7aa12a322e2f0ab4ed1260b366b4f1823df"} Feb 18 19:30:41 crc kubenswrapper[4942]: I0218 19:30:41.871558 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4jkrm" event={"ID":"eddf6439-0868-428b-9bc0-5b85371d6103","Type":"ContainerStarted","Data":"8fe1370375fde5ab82cb849e306c5a55005ecf678b7f98871206d083686119c5"} Feb 18 19:30:41 crc kubenswrapper[4942]: I0218 19:30:41.871570 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4jkrm" event={"ID":"eddf6439-0868-428b-9bc0-5b85371d6103","Type":"ContainerStarted","Data":"7d9c9792cd5dfdb3e0bfb306d232e27602d9208a0b1d4fbf215965dec13f1bf2"} Feb 18 19:30:41 crc kubenswrapper[4942]: I0218 19:30:41.896232 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/frr-k8s-4jkrm" podStartSLOduration=4.771272472 podStartE2EDuration="11.89621418s" podCreationTimestamp="2026-02-18 19:30:30 +0000 UTC" firstStartedPulling="2026-02-18 19:30:31.20308783 +0000 UTC m=+790.908020495" lastFinishedPulling="2026-02-18 19:30:38.328029528 +0000 UTC m=+798.032962203" observedRunningTime="2026-02-18 19:30:41.892672902 +0000 UTC m=+801.597605567" watchObservedRunningTime="2026-02-18 19:30:41.89621418 +0000 UTC m=+801.601146845" Feb 18 19:30:42 crc kubenswrapper[4942]: I0218 19:30:42.600938 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-pm8vg" Feb 18 19:30:45 crc kubenswrapper[4942]: I0218 19:30:45.446748 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-9fdg7"] Feb 18 19:30:45 crc kubenswrapper[4942]: I0218 19:30:45.453573 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9fdg7" Feb 18 19:30:45 crc kubenswrapper[4942]: I0218 19:30:45.455931 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-nqjmc" Feb 18 19:30:45 crc kubenswrapper[4942]: I0218 19:30:45.456484 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 18 19:30:45 crc kubenswrapper[4942]: I0218 19:30:45.456540 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 18 19:30:45 crc kubenswrapper[4942]: I0218 19:30:45.456617 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9fdg7"] Feb 18 19:30:45 crc kubenswrapper[4942]: I0218 19:30:45.644146 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mtww\" (UniqueName: 
\"kubernetes.io/projected/a991d775-aaf3-4672-a039-e0e212c0be47-kube-api-access-6mtww\") pod \"openstack-operator-index-9fdg7\" (UID: \"a991d775-aaf3-4672-a039-e0e212c0be47\") " pod="openstack-operators/openstack-operator-index-9fdg7" Feb 18 19:30:45 crc kubenswrapper[4942]: I0218 19:30:45.745424 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mtww\" (UniqueName: \"kubernetes.io/projected/a991d775-aaf3-4672-a039-e0e212c0be47-kube-api-access-6mtww\") pod \"openstack-operator-index-9fdg7\" (UID: \"a991d775-aaf3-4672-a039-e0e212c0be47\") " pod="openstack-operators/openstack-operator-index-9fdg7" Feb 18 19:30:45 crc kubenswrapper[4942]: I0218 19:30:45.780524 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mtww\" (UniqueName: \"kubernetes.io/projected/a991d775-aaf3-4672-a039-e0e212c0be47-kube-api-access-6mtww\") pod \"openstack-operator-index-9fdg7\" (UID: \"a991d775-aaf3-4672-a039-e0e212c0be47\") " pod="openstack-operators/openstack-operator-index-9fdg7" Feb 18 19:30:46 crc kubenswrapper[4942]: I0218 19:30:46.010952 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:46 crc kubenswrapper[4942]: I0218 19:30:46.069591 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9fdg7" Feb 18 19:30:46 crc kubenswrapper[4942]: I0218 19:30:46.076873 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:46 crc kubenswrapper[4942]: I0218 19:30:46.552154 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9fdg7"] Feb 18 19:30:46 crc kubenswrapper[4942]: I0218 19:30:46.909947 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9fdg7" event={"ID":"a991d775-aaf3-4672-a039-e0e212c0be47","Type":"ContainerStarted","Data":"53714d49461e4e0da0b076abca969cde23b7aeaeda7b3afdc4dfa1f5170c63e5"} Feb 18 19:30:48 crc kubenswrapper[4942]: I0218 19:30:48.798494 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9fdg7"] Feb 18 19:30:49 crc kubenswrapper[4942]: I0218 19:30:49.405530 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-kjnfm"] Feb 18 19:30:49 crc kubenswrapper[4942]: I0218 19:30:49.406549 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kjnfm" Feb 18 19:30:49 crc kubenswrapper[4942]: I0218 19:30:49.429414 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kjnfm"] Feb 18 19:30:49 crc kubenswrapper[4942]: I0218 19:30:49.596230 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95ql6\" (UniqueName: \"kubernetes.io/projected/c2a7b573-c260-4ebc-8a90-c935ce2e9b05-kube-api-access-95ql6\") pod \"openstack-operator-index-kjnfm\" (UID: \"c2a7b573-c260-4ebc-8a90-c935ce2e9b05\") " pod="openstack-operators/openstack-operator-index-kjnfm" Feb 18 19:30:49 crc kubenswrapper[4942]: I0218 19:30:49.697618 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95ql6\" (UniqueName: \"kubernetes.io/projected/c2a7b573-c260-4ebc-8a90-c935ce2e9b05-kube-api-access-95ql6\") pod \"openstack-operator-index-kjnfm\" (UID: \"c2a7b573-c260-4ebc-8a90-c935ce2e9b05\") " pod="openstack-operators/openstack-operator-index-kjnfm" Feb 18 19:30:49 crc kubenswrapper[4942]: I0218 19:30:49.730324 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95ql6\" (UniqueName: \"kubernetes.io/projected/c2a7b573-c260-4ebc-8a90-c935ce2e9b05-kube-api-access-95ql6\") pod \"openstack-operator-index-kjnfm\" (UID: \"c2a7b573-c260-4ebc-8a90-c935ce2e9b05\") " pod="openstack-operators/openstack-operator-index-kjnfm" Feb 18 19:30:49 crc kubenswrapper[4942]: I0218 19:30:49.740849 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kjnfm" Feb 18 19:30:49 crc kubenswrapper[4942]: I0218 19:30:49.932108 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9fdg7" event={"ID":"a991d775-aaf3-4672-a039-e0e212c0be47","Type":"ContainerStarted","Data":"c0aaaa315437c38653cc50f8c199db4869a42818a7f8eb059590f4ee478d0b5a"} Feb 18 19:30:49 crc kubenswrapper[4942]: I0218 19:30:49.932457 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-9fdg7" podUID="a991d775-aaf3-4672-a039-e0e212c0be47" containerName="registry-server" containerID="cri-o://c0aaaa315437c38653cc50f8c199db4869a42818a7f8eb059590f4ee478d0b5a" gracePeriod=2 Feb 18 19:30:49 crc kubenswrapper[4942]: I0218 19:30:49.948642 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-9fdg7" podStartSLOduration=2.569181888 podStartE2EDuration="4.948625534s" podCreationTimestamp="2026-02-18 19:30:45 +0000 UTC" firstStartedPulling="2026-02-18 19:30:46.56158293 +0000 UTC m=+806.266515635" lastFinishedPulling="2026-02-18 19:30:48.941026616 +0000 UTC m=+808.645959281" observedRunningTime="2026-02-18 19:30:49.944738257 +0000 UTC m=+809.649670922" watchObservedRunningTime="2026-02-18 19:30:49.948625534 +0000 UTC m=+809.653558199" Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.170213 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kjnfm"] Feb 18 19:30:50 crc kubenswrapper[4942]: W0218 19:30:50.183650 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2a7b573_c260_4ebc_8a90_c935ce2e9b05.slice/crio-7753afbf3c777de2999a208e57e6fc75d0e6c30dd7288ba2239c2938ff4af054 WatchSource:0}: Error finding container 
7753afbf3c777de2999a208e57e6fc75d0e6c30dd7288ba2239c2938ff4af054: Status 404 returned error can't find the container with id 7753afbf3c777de2999a208e57e6fc75d0e6c30dd7288ba2239c2938ff4af054 Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.291276 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9fdg7" Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.313149 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mtww\" (UniqueName: \"kubernetes.io/projected/a991d775-aaf3-4672-a039-e0e212c0be47-kube-api-access-6mtww\") pod \"a991d775-aaf3-4672-a039-e0e212c0be47\" (UID: \"a991d775-aaf3-4672-a039-e0e212c0be47\") " Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.322654 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a991d775-aaf3-4672-a039-e0e212c0be47-kube-api-access-6mtww" (OuterVolumeSpecName: "kube-api-access-6mtww") pod "a991d775-aaf3-4672-a039-e0e212c0be47" (UID: "a991d775-aaf3-4672-a039-e0e212c0be47"). InnerVolumeSpecName "kube-api-access-6mtww". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.416825 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mtww\" (UniqueName: \"kubernetes.io/projected/a991d775-aaf3-4672-a039-e0e212c0be47-kube-api-access-6mtww\") on node \"crc\" DevicePath \"\"" Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.943701 4942 generic.go:334] "Generic (PLEG): container finished" podID="a991d775-aaf3-4672-a039-e0e212c0be47" containerID="c0aaaa315437c38653cc50f8c199db4869a42818a7f8eb059590f4ee478d0b5a" exitCode=0 Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.943785 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9fdg7" Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.943812 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9fdg7" event={"ID":"a991d775-aaf3-4672-a039-e0e212c0be47","Type":"ContainerDied","Data":"c0aaaa315437c38653cc50f8c199db4869a42818a7f8eb059590f4ee478d0b5a"} Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.944428 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9fdg7" event={"ID":"a991d775-aaf3-4672-a039-e0e212c0be47","Type":"ContainerDied","Data":"53714d49461e4e0da0b076abca969cde23b7aeaeda7b3afdc4dfa1f5170c63e5"} Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.944449 4942 scope.go:117] "RemoveContainer" containerID="c0aaaa315437c38653cc50f8c199db4869a42818a7f8eb059590f4ee478d0b5a" Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.948854 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kjnfm" event={"ID":"c2a7b573-c260-4ebc-8a90-c935ce2e9b05","Type":"ContainerStarted","Data":"f325496d6ebbd65e31717afe4b8565caca6e20cc557398f1b43cb36e8ca14c55"} Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.948920 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kjnfm" event={"ID":"c2a7b573-c260-4ebc-8a90-c935ce2e9b05","Type":"ContainerStarted","Data":"7753afbf3c777de2999a208e57e6fc75d0e6c30dd7288ba2239c2938ff4af054"} Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.973789 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-kjnfm" podStartSLOduration=1.923243228 podStartE2EDuration="1.973718239s" podCreationTimestamp="2026-02-18 19:30:49 +0000 UTC" firstStartedPulling="2026-02-18 19:30:50.188381715 +0000 UTC m=+809.893314380" lastFinishedPulling="2026-02-18 
19:30:50.238856726 +0000 UTC m=+809.943789391" observedRunningTime="2026-02-18 19:30:50.972870987 +0000 UTC m=+810.677803692" watchObservedRunningTime="2026-02-18 19:30:50.973718239 +0000 UTC m=+810.678650954" Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.981633 4942 scope.go:117] "RemoveContainer" containerID="c0aaaa315437c38653cc50f8c199db4869a42818a7f8eb059590f4ee478d0b5a" Feb 18 19:30:50 crc kubenswrapper[4942]: E0218 19:30:50.982197 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0aaaa315437c38653cc50f8c199db4869a42818a7f8eb059590f4ee478d0b5a\": container with ID starting with c0aaaa315437c38653cc50f8c199db4869a42818a7f8eb059590f4ee478d0b5a not found: ID does not exist" containerID="c0aaaa315437c38653cc50f8c199db4869a42818a7f8eb059590f4ee478d0b5a" Feb 18 19:30:50 crc kubenswrapper[4942]: I0218 19:30:50.982257 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0aaaa315437c38653cc50f8c199db4869a42818a7f8eb059590f4ee478d0b5a"} err="failed to get container status \"c0aaaa315437c38653cc50f8c199db4869a42818a7f8eb059590f4ee478d0b5a\": rpc error: code = NotFound desc = could not find container \"c0aaaa315437c38653cc50f8c199db4869a42818a7f8eb059590f4ee478d0b5a\": container with ID starting with c0aaaa315437c38653cc50f8c199db4869a42818a7f8eb059590f4ee478d0b5a not found: ID does not exist" Feb 18 19:30:51 crc kubenswrapper[4942]: I0218 19:30:51.000521 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9fdg7"] Feb 18 19:30:51 crc kubenswrapper[4942]: I0218 19:30:51.006747 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-9fdg7"] Feb 18 19:30:51 crc kubenswrapper[4942]: I0218 19:30:51.010011 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-7ghrb" 
Feb 18 19:30:51 crc kubenswrapper[4942]: I0218 19:30:51.015511 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-4jkrm" Feb 18 19:30:51 crc kubenswrapper[4942]: I0218 19:30:51.053821 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a991d775-aaf3-4672-a039-e0e212c0be47" path="/var/lib/kubelet/pods/a991d775-aaf3-4672-a039-e0e212c0be47/volumes" Feb 18 19:30:53 crc kubenswrapper[4942]: I0218 19:30:53.741397 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:30:53 crc kubenswrapper[4942]: I0218 19:30:53.741836 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:30:59 crc kubenswrapper[4942]: I0218 19:30:59.742015 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-kjnfm" Feb 18 19:30:59 crc kubenswrapper[4942]: I0218 19:30:59.742343 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-kjnfm" Feb 18 19:30:59 crc kubenswrapper[4942]: I0218 19:30:59.782145 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-kjnfm" Feb 18 19:31:00 crc kubenswrapper[4942]: I0218 19:31:00.060619 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-kjnfm" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 
19:31:01.066162 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln"] Feb 18 19:31:01 crc kubenswrapper[4942]: E0218 19:31:01.066456 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a991d775-aaf3-4672-a039-e0e212c0be47" containerName="registry-server" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.066470 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a991d775-aaf3-4672-a039-e0e212c0be47" containerName="registry-server" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.066593 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="a991d775-aaf3-4672-a039-e0e212c0be47" containerName="registry-server" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.067446 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.076250 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln"] Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.077754 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wvrwg" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.185291 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-util\") pod \"c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln\" (UID: \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\") " pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.185720 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-pc5ph\" (UniqueName: \"kubernetes.io/projected/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-kube-api-access-pc5ph\") pod \"c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln\" (UID: \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\") " pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.185948 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-bundle\") pod \"c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln\" (UID: \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\") " pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.286991 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-util\") pod \"c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln\" (UID: \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\") " pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.287099 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc5ph\" (UniqueName: \"kubernetes.io/projected/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-kube-api-access-pc5ph\") pod \"c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln\" (UID: \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\") " pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.287189 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-bundle\") pod 
\"c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln\" (UID: \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\") " pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.287793 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-bundle\") pod \"c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln\" (UID: \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\") " pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.288018 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-util\") pod \"c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln\" (UID: \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\") " pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.317101 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc5ph\" (UniqueName: \"kubernetes.io/projected/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-kube-api-access-pc5ph\") pod \"c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln\" (UID: \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\") " pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.394281 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" Feb 18 19:31:01 crc kubenswrapper[4942]: I0218 19:31:01.828386 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln"] Feb 18 19:31:01 crc kubenswrapper[4942]: W0218 19:31:01.841861 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d1e1c52_dc07_468c_ad10_e1c39be1a5b5.slice/crio-c6e58174a012088918273a5d29807ed1147ce5b60b5df40edd1dd67a55f99d15 WatchSource:0}: Error finding container c6e58174a012088918273a5d29807ed1147ce5b60b5df40edd1dd67a55f99d15: Status 404 returned error can't find the container with id c6e58174a012088918273a5d29807ed1147ce5b60b5df40edd1dd67a55f99d15 Feb 18 19:31:02 crc kubenswrapper[4942]: I0218 19:31:02.054057 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" event={"ID":"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5","Type":"ContainerStarted","Data":"bc54afe9a8d3e8c30ea0ab0fda8b393f560170d2dddddbfc8d3f765fff73c7af"} Feb 18 19:31:02 crc kubenswrapper[4942]: I0218 19:31:02.054422 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" event={"ID":"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5","Type":"ContainerStarted","Data":"c6e58174a012088918273a5d29807ed1147ce5b60b5df40edd1dd67a55f99d15"} Feb 18 19:31:02 crc kubenswrapper[4942]: E0218 19:31:02.215925 4942 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d1e1c52_dc07_468c_ad10_e1c39be1a5b5.slice/crio-bc54afe9a8d3e8c30ea0ab0fda8b393f560170d2dddddbfc8d3f765fff73c7af.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d1e1c52_dc07_468c_ad10_e1c39be1a5b5.slice/crio-conmon-bc54afe9a8d3e8c30ea0ab0fda8b393f560170d2dddddbfc8d3f765fff73c7af.scope\": RecentStats: unable to find data in memory cache]" Feb 18 19:31:03 crc kubenswrapper[4942]: I0218 19:31:03.062976 4942 generic.go:334] "Generic (PLEG): container finished" podID="9d1e1c52-dc07-468c-ad10-e1c39be1a5b5" containerID="bc54afe9a8d3e8c30ea0ab0fda8b393f560170d2dddddbfc8d3f765fff73c7af" exitCode=0 Feb 18 19:31:03 crc kubenswrapper[4942]: I0218 19:31:03.063034 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" event={"ID":"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5","Type":"ContainerDied","Data":"bc54afe9a8d3e8c30ea0ab0fda8b393f560170d2dddddbfc8d3f765fff73c7af"} Feb 18 19:31:04 crc kubenswrapper[4942]: I0218 19:31:04.075990 4942 generic.go:334] "Generic (PLEG): container finished" podID="9d1e1c52-dc07-468c-ad10-e1c39be1a5b5" containerID="8a5286f376506706706bb1b0a1894c49cffb2a9d67abd03ae06a8cd1ec83057e" exitCode=0 Feb 18 19:31:04 crc kubenswrapper[4942]: I0218 19:31:04.076068 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" event={"ID":"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5","Type":"ContainerDied","Data":"8a5286f376506706706bb1b0a1894c49cffb2a9d67abd03ae06a8cd1ec83057e"} Feb 18 19:31:05 crc kubenswrapper[4942]: I0218 19:31:05.091151 4942 generic.go:334] "Generic (PLEG): container finished" podID="9d1e1c52-dc07-468c-ad10-e1c39be1a5b5" containerID="7d2535a756fd4b341284d12e18b8c048d1ad768b210a82136b38af44b4e60253" exitCode=0 Feb 18 19:31:05 crc kubenswrapper[4942]: I0218 19:31:05.091190 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" 
event={"ID":"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5","Type":"ContainerDied","Data":"7d2535a756fd4b341284d12e18b8c048d1ad768b210a82136b38af44b4e60253"} Feb 18 19:31:06 crc kubenswrapper[4942]: I0218 19:31:06.383239 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" Feb 18 19:31:06 crc kubenswrapper[4942]: I0218 19:31:06.460498 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-util\") pod \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\" (UID: \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\") " Feb 18 19:31:06 crc kubenswrapper[4942]: I0218 19:31:06.460614 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc5ph\" (UniqueName: \"kubernetes.io/projected/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-kube-api-access-pc5ph\") pod \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\" (UID: \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\") " Feb 18 19:31:06 crc kubenswrapper[4942]: I0218 19:31:06.460663 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-bundle\") pod \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\" (UID: \"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5\") " Feb 18 19:31:06 crc kubenswrapper[4942]: I0218 19:31:06.461456 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-bundle" (OuterVolumeSpecName: "bundle") pod "9d1e1c52-dc07-468c-ad10-e1c39be1a5b5" (UID: "9d1e1c52-dc07-468c-ad10-e1c39be1a5b5"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:31:06 crc kubenswrapper[4942]: I0218 19:31:06.465230 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-kube-api-access-pc5ph" (OuterVolumeSpecName: "kube-api-access-pc5ph") pod "9d1e1c52-dc07-468c-ad10-e1c39be1a5b5" (UID: "9d1e1c52-dc07-468c-ad10-e1c39be1a5b5"). InnerVolumeSpecName "kube-api-access-pc5ph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:31:06 crc kubenswrapper[4942]: I0218 19:31:06.482438 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-util" (OuterVolumeSpecName: "util") pod "9d1e1c52-dc07-468c-ad10-e1c39be1a5b5" (UID: "9d1e1c52-dc07-468c-ad10-e1c39be1a5b5"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:31:06 crc kubenswrapper[4942]: I0218 19:31:06.562294 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc5ph\" (UniqueName: \"kubernetes.io/projected/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-kube-api-access-pc5ph\") on node \"crc\" DevicePath \"\"" Feb 18 19:31:06 crc kubenswrapper[4942]: I0218 19:31:06.562404 4942 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:31:06 crc kubenswrapper[4942]: I0218 19:31:06.562427 4942 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9d1e1c52-dc07-468c-ad10-e1c39be1a5b5-util\") on node \"crc\" DevicePath \"\"" Feb 18 19:31:07 crc kubenswrapper[4942]: I0218 19:31:07.109667 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" 
event={"ID":"9d1e1c52-dc07-468c-ad10-e1c39be1a5b5","Type":"ContainerDied","Data":"c6e58174a012088918273a5d29807ed1147ce5b60b5df40edd1dd67a55f99d15"} Feb 18 19:31:07 crc kubenswrapper[4942]: I0218 19:31:07.109740 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6e58174a012088918273a5d29807ed1147ce5b60b5df40edd1dd67a55f99d15" Feb 18 19:31:07 crc kubenswrapper[4942]: I0218 19:31:07.109863 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c2389ce2411c77579885128099fd69f69b8f53a852f66d1318588c5f70p7fln" Feb 18 19:31:13 crc kubenswrapper[4942]: I0218 19:31:13.210490 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-fc58468f4-xvr6v"] Feb 18 19:31:13 crc kubenswrapper[4942]: E0218 19:31:13.211319 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d1e1c52-dc07-468c-ad10-e1c39be1a5b5" containerName="util" Feb 18 19:31:13 crc kubenswrapper[4942]: I0218 19:31:13.211336 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d1e1c52-dc07-468c-ad10-e1c39be1a5b5" containerName="util" Feb 18 19:31:13 crc kubenswrapper[4942]: E0218 19:31:13.211346 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d1e1c52-dc07-468c-ad10-e1c39be1a5b5" containerName="extract" Feb 18 19:31:13 crc kubenswrapper[4942]: I0218 19:31:13.211353 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d1e1c52-dc07-468c-ad10-e1c39be1a5b5" containerName="extract" Feb 18 19:31:13 crc kubenswrapper[4942]: E0218 19:31:13.211367 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d1e1c52-dc07-468c-ad10-e1c39be1a5b5" containerName="pull" Feb 18 19:31:13 crc kubenswrapper[4942]: I0218 19:31:13.211374 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d1e1c52-dc07-468c-ad10-e1c39be1a5b5" containerName="pull" Feb 18 19:31:13 crc kubenswrapper[4942]: I0218 19:31:13.211515 4942 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9d1e1c52-dc07-468c-ad10-e1c39be1a5b5" containerName="extract" Feb 18 19:31:13 crc kubenswrapper[4942]: I0218 19:31:13.212039 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-fc58468f4-xvr6v" Feb 18 19:31:13 crc kubenswrapper[4942]: I0218 19:31:13.215089 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-z8plb" Feb 18 19:31:13 crc kubenswrapper[4942]: I0218 19:31:13.236818 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-fc58468f4-xvr6v"] Feb 18 19:31:13 crc kubenswrapper[4942]: I0218 19:31:13.255364 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpjdz\" (UniqueName: \"kubernetes.io/projected/268dc206-1be7-4a8a-8cd7-45b3c667b3bd-kube-api-access-lpjdz\") pod \"openstack-operator-controller-init-fc58468f4-xvr6v\" (UID: \"268dc206-1be7-4a8a-8cd7-45b3c667b3bd\") " pod="openstack-operators/openstack-operator-controller-init-fc58468f4-xvr6v" Feb 18 19:31:13 crc kubenswrapper[4942]: I0218 19:31:13.356317 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpjdz\" (UniqueName: \"kubernetes.io/projected/268dc206-1be7-4a8a-8cd7-45b3c667b3bd-kube-api-access-lpjdz\") pod \"openstack-operator-controller-init-fc58468f4-xvr6v\" (UID: \"268dc206-1be7-4a8a-8cd7-45b3c667b3bd\") " pod="openstack-operators/openstack-operator-controller-init-fc58468f4-xvr6v" Feb 18 19:31:13 crc kubenswrapper[4942]: I0218 19:31:13.373685 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpjdz\" (UniqueName: \"kubernetes.io/projected/268dc206-1be7-4a8a-8cd7-45b3c667b3bd-kube-api-access-lpjdz\") pod \"openstack-operator-controller-init-fc58468f4-xvr6v\" 
(UID: \"268dc206-1be7-4a8a-8cd7-45b3c667b3bd\") " pod="openstack-operators/openstack-operator-controller-init-fc58468f4-xvr6v" Feb 18 19:31:13 crc kubenswrapper[4942]: I0218 19:31:13.527989 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-fc58468f4-xvr6v" Feb 18 19:31:13 crc kubenswrapper[4942]: I0218 19:31:13.756219 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-fc58468f4-xvr6v"] Feb 18 19:31:14 crc kubenswrapper[4942]: I0218 19:31:14.166754 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-fc58468f4-xvr6v" event={"ID":"268dc206-1be7-4a8a-8cd7-45b3c667b3bd","Type":"ContainerStarted","Data":"1d462ee4383d0179f60289a8d57fa47871dad9b6029ef1927d016cbc87a139ae"} Feb 18 19:31:18 crc kubenswrapper[4942]: I0218 19:31:18.198742 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-fc58468f4-xvr6v" event={"ID":"268dc206-1be7-4a8a-8cd7-45b3c667b3bd","Type":"ContainerStarted","Data":"774686fb33e15d8649344f37dc7798663513e5d79967448c8dbc91c16bca7f32"} Feb 18 19:31:18 crc kubenswrapper[4942]: I0218 19:31:18.199556 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-fc58468f4-xvr6v" Feb 18 19:31:18 crc kubenswrapper[4942]: I0218 19:31:18.235642 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-fc58468f4-xvr6v" podStartSLOduration=1.469611162 podStartE2EDuration="5.235625427s" podCreationTimestamp="2026-02-18 19:31:13 +0000 UTC" firstStartedPulling="2026-02-18 19:31:13.765475586 +0000 UTC m=+833.470408261" lastFinishedPulling="2026-02-18 19:31:17.531489861 +0000 UTC m=+837.236422526" observedRunningTime="2026-02-18 19:31:18.23017996 +0000 UTC 
m=+837.935112625" watchObservedRunningTime="2026-02-18 19:31:18.235625427 +0000 UTC m=+837.940558092" Feb 18 19:31:23 crc kubenswrapper[4942]: I0218 19:31:23.532640 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-fc58468f4-xvr6v" Feb 18 19:31:23 crc kubenswrapper[4942]: I0218 19:31:23.741202 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:31:23 crc kubenswrapper[4942]: I0218 19:31:23.741596 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:31:23 crc kubenswrapper[4942]: I0218 19:31:23.741655 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:31:23 crc kubenswrapper[4942]: I0218 19:31:23.742450 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"573640abad6b15c1dd30fd80a1b600755a1efda149dab25e49e3a1173acf646a"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:31:23 crc kubenswrapper[4942]: I0218 19:31:23.742524 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" 
containerID="cri-o://573640abad6b15c1dd30fd80a1b600755a1efda149dab25e49e3a1173acf646a" gracePeriod=600 Feb 18 19:31:24 crc kubenswrapper[4942]: I0218 19:31:24.247697 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="573640abad6b15c1dd30fd80a1b600755a1efda149dab25e49e3a1173acf646a" exitCode=0 Feb 18 19:31:24 crc kubenswrapper[4942]: I0218 19:31:24.247889 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"573640abad6b15c1dd30fd80a1b600755a1efda149dab25e49e3a1173acf646a"} Feb 18 19:31:24 crc kubenswrapper[4942]: I0218 19:31:24.248082 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"4ad75b87330a71997979db298f42e179882b61890e654d3a0c077cf25d5cb90b"} Feb 18 19:31:24 crc kubenswrapper[4942]: I0218 19:31:24.248115 4942 scope.go:117] "RemoveContainer" containerID="69563ccc2ca715071d77cf8ee678820b7e15eada4a6e511a3ef021c2758d0101" Feb 18 19:31:35 crc kubenswrapper[4942]: I0218 19:31:35.856347 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-928mg"] Feb 18 19:31:35 crc kubenswrapper[4942]: I0218 19:31:35.859730 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:35 crc kubenswrapper[4942]: I0218 19:31:35.873612 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-928mg"] Feb 18 19:31:35 crc kubenswrapper[4942]: I0218 19:31:35.961405 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83b97eec-f1b8-4205-933f-205e30caeec2-catalog-content\") pod \"certified-operators-928mg\" (UID: \"83b97eec-f1b8-4205-933f-205e30caeec2\") " pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:35 crc kubenswrapper[4942]: I0218 19:31:35.961464 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kt9m\" (UniqueName: \"kubernetes.io/projected/83b97eec-f1b8-4205-933f-205e30caeec2-kube-api-access-6kt9m\") pod \"certified-operators-928mg\" (UID: \"83b97eec-f1b8-4205-933f-205e30caeec2\") " pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:35 crc kubenswrapper[4942]: I0218 19:31:35.961496 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83b97eec-f1b8-4205-933f-205e30caeec2-utilities\") pod \"certified-operators-928mg\" (UID: \"83b97eec-f1b8-4205-933f-205e30caeec2\") " pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:36 crc kubenswrapper[4942]: I0218 19:31:36.062474 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83b97eec-f1b8-4205-933f-205e30caeec2-catalog-content\") pod \"certified-operators-928mg\" (UID: \"83b97eec-f1b8-4205-933f-205e30caeec2\") " pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:36 crc kubenswrapper[4942]: I0218 19:31:36.062544 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6kt9m\" (UniqueName: \"kubernetes.io/projected/83b97eec-f1b8-4205-933f-205e30caeec2-kube-api-access-6kt9m\") pod \"certified-operators-928mg\" (UID: \"83b97eec-f1b8-4205-933f-205e30caeec2\") " pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:36 crc kubenswrapper[4942]: I0218 19:31:36.062581 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83b97eec-f1b8-4205-933f-205e30caeec2-utilities\") pod \"certified-operators-928mg\" (UID: \"83b97eec-f1b8-4205-933f-205e30caeec2\") " pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:36 crc kubenswrapper[4942]: I0218 19:31:36.063122 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83b97eec-f1b8-4205-933f-205e30caeec2-utilities\") pod \"certified-operators-928mg\" (UID: \"83b97eec-f1b8-4205-933f-205e30caeec2\") " pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:36 crc kubenswrapper[4942]: I0218 19:31:36.063335 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83b97eec-f1b8-4205-933f-205e30caeec2-catalog-content\") pod \"certified-operators-928mg\" (UID: \"83b97eec-f1b8-4205-933f-205e30caeec2\") " pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:36 crc kubenswrapper[4942]: I0218 19:31:36.085549 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kt9m\" (UniqueName: \"kubernetes.io/projected/83b97eec-f1b8-4205-933f-205e30caeec2-kube-api-access-6kt9m\") pod \"certified-operators-928mg\" (UID: \"83b97eec-f1b8-4205-933f-205e30caeec2\") " pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:36 crc kubenswrapper[4942]: I0218 19:31:36.185115 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:36 crc kubenswrapper[4942]: I0218 19:31:36.709477 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-928mg"] Feb 18 19:31:37 crc kubenswrapper[4942]: I0218 19:31:37.353561 4942 generic.go:334] "Generic (PLEG): container finished" podID="83b97eec-f1b8-4205-933f-205e30caeec2" containerID="d49471940515dac44ca7b4deb7b69786b17d58c82210cbd128da7a4353fdc212" exitCode=0 Feb 18 19:31:37 crc kubenswrapper[4942]: I0218 19:31:37.353808 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-928mg" event={"ID":"83b97eec-f1b8-4205-933f-205e30caeec2","Type":"ContainerDied","Data":"d49471940515dac44ca7b4deb7b69786b17d58c82210cbd128da7a4353fdc212"} Feb 18 19:31:37 crc kubenswrapper[4942]: I0218 19:31:37.353831 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-928mg" event={"ID":"83b97eec-f1b8-4205-933f-205e30caeec2","Type":"ContainerStarted","Data":"e90c667153875bf407511ed88e15dc632e46a63fd6b238de865623e6e16e6e1a"} Feb 18 19:31:38 crc kubenswrapper[4942]: I0218 19:31:38.362256 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-928mg" event={"ID":"83b97eec-f1b8-4205-933f-205e30caeec2","Type":"ContainerStarted","Data":"63a88ca6ca33dff5e2ab3bac904d6ef958cd2ba371f5bb8c57b0d9a89c8d0c4e"} Feb 18 19:31:39 crc kubenswrapper[4942]: I0218 19:31:39.369780 4942 generic.go:334] "Generic (PLEG): container finished" podID="83b97eec-f1b8-4205-933f-205e30caeec2" containerID="63a88ca6ca33dff5e2ab3bac904d6ef958cd2ba371f5bb8c57b0d9a89c8d0c4e" exitCode=0 Feb 18 19:31:39 crc kubenswrapper[4942]: I0218 19:31:39.370028 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-928mg" 
event={"ID":"83b97eec-f1b8-4205-933f-205e30caeec2","Type":"ContainerDied","Data":"63a88ca6ca33dff5e2ab3bac904d6ef958cd2ba371f5bb8c57b0d9a89c8d0c4e"} Feb 18 19:31:40 crc kubenswrapper[4942]: I0218 19:31:40.379223 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-928mg" event={"ID":"83b97eec-f1b8-4205-933f-205e30caeec2","Type":"ContainerStarted","Data":"801e3e316ccbbae8f59abaf78bbc4c858ca81ddbf88f422f0ae0653aa768a2de"} Feb 18 19:31:40 crc kubenswrapper[4942]: I0218 19:31:40.402238 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-928mg" podStartSLOduration=2.8837019699999997 podStartE2EDuration="5.402219007s" podCreationTimestamp="2026-02-18 19:31:35 +0000 UTC" firstStartedPulling="2026-02-18 19:31:37.355387757 +0000 UTC m=+857.060320422" lastFinishedPulling="2026-02-18 19:31:39.873904794 +0000 UTC m=+859.578837459" observedRunningTime="2026-02-18 19:31:40.39673684 +0000 UTC m=+860.101669505" watchObservedRunningTime="2026-02-18 19:31:40.402219007 +0000 UTC m=+860.107151682" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.040268 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-rvgp6"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.041799 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-rvgp6" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.043790 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-lp758" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.045882 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57746b5ff9-56k6g"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.050389 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56k6g" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.051720 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-rrtjz" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.071551 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57746b5ff9-56k6g"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.077714 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-26x4h"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.078690 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-26x4h" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.082030 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-zgrcf" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.097403 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-26x4h"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.113721 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68c6d499cb-g7kpv"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.114708 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-g7kpv" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.131967 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-5zz9p" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.172392 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68c6d499cb-g7kpv"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.174186 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g2gg\" (UniqueName: \"kubernetes.io/projected/51f45ea1-2b95-4553-9e3d-5e6bb4c8b862-kube-api-access-4g2gg\") pod \"cinder-operator-controller-manager-57746b5ff9-56k6g\" (UID: \"51f45ea1-2b95-4553-9e3d-5e6bb4c8b862\") " pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56k6g" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.174303 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgbdq\" 
(UniqueName: \"kubernetes.io/projected/829c57a8-54c3-43c5-8bea-2ceeeafeb143-kube-api-access-zgbdq\") pod \"barbican-operator-controller-manager-c4b7d6946-rvgp6\" (UID: \"829c57a8-54c3-43c5-8bea-2ceeeafeb143\") " pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-rvgp6" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.174376 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4hbp\" (UniqueName: \"kubernetes.io/projected/844a0cad-5a6a-4ab4-8e32-388835eb9f4a-kube-api-access-n4hbp\") pod \"designate-operator-controller-manager-55cc45767f-26x4h\" (UID: \"844a0cad-5a6a-4ab4-8e32-388835eb9f4a\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-26x4h" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.182834 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-xrzwv"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.183676 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-9595d6797-xrzwv" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.185068 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-rvgp6"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.195390 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-mgvvk" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.196546 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-xrzwv"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.203456 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-9gjbj"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.204275 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-9gjbj" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.216071 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-ns786" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.228813 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-9gjbj"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.236404 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.237269 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.242739 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.243046 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-ncxkw" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.246162 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.263778 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-9qvzl"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.264827 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-9qvzl" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.268296 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.269242 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.276207 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-9qvzl"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.276693 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g2gg\" (UniqueName: \"kubernetes.io/projected/51f45ea1-2b95-4553-9e3d-5e6bb4c8b862-kube-api-access-4g2gg\") pod \"cinder-operator-controller-manager-57746b5ff9-56k6g\" (UID: \"51f45ea1-2b95-4553-9e3d-5e6bb4c8b862\") " pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56k6g" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.276777 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsb7d\" (UniqueName: \"kubernetes.io/projected/8d849c9e-0da1-4910-9922-5ea2dd2728a2-kube-api-access-rsb7d\") pod \"glance-operator-controller-manager-68c6d499cb-g7kpv\" (UID: \"8d849c9e-0da1-4910-9922-5ea2dd2728a2\") " pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-g7kpv" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.276801 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7f4p\" (UniqueName: \"kubernetes.io/projected/4cbefad2-6c6d-4b7b-bba9-acf857a54a4b-kube-api-access-c7f4p\") pod \"heat-operator-controller-manager-9595d6797-xrzwv\" (UID: \"4cbefad2-6c6d-4b7b-bba9-acf857a54a4b\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-xrzwv" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.276819 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgbdq\" (UniqueName: \"kubernetes.io/projected/829c57a8-54c3-43c5-8bea-2ceeeafeb143-kube-api-access-zgbdq\") pod 
\"barbican-operator-controller-manager-c4b7d6946-rvgp6\" (UID: \"829c57a8-54c3-43c5-8bea-2ceeeafeb143\") " pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-rvgp6" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.276851 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4hbp\" (UniqueName: \"kubernetes.io/projected/844a0cad-5a6a-4ab4-8e32-388835eb9f4a-kube-api-access-n4hbp\") pod \"designate-operator-controller-manager-55cc45767f-26x4h\" (UID: \"844a0cad-5a6a-4ab4-8e32-388835eb9f4a\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-26x4h" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.278755 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-q9kfx" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.278931 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-9v4zz" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.312812 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.313587 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.333645 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g2gg\" (UniqueName: \"kubernetes.io/projected/51f45ea1-2b95-4553-9e3d-5e6bb4c8b862-kube-api-access-4g2gg\") pod \"cinder-operator-controller-manager-57746b5ff9-56k6g\" (UID: \"51f45ea1-2b95-4553-9e3d-5e6bb4c8b862\") " pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56k6g" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.333662 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgbdq\" (UniqueName: \"kubernetes.io/projected/829c57a8-54c3-43c5-8bea-2ceeeafeb143-kube-api-access-zgbdq\") pod \"barbican-operator-controller-manager-c4b7d6946-rvgp6\" (UID: \"829c57a8-54c3-43c5-8bea-2ceeeafeb143\") " pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-rvgp6" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.334421 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-cxw8x" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.339817 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.346401 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4hbp\" (UniqueName: \"kubernetes.io/projected/844a0cad-5a6a-4ab4-8e32-388835eb9f4a-kube-api-access-n4hbp\") pod \"designate-operator-controller-manager-55cc45767f-26x4h\" (UID: \"844a0cad-5a6a-4ab4-8e32-388835eb9f4a\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-26x4h" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.353832 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.370941 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-rvgp6" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.379812 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert\") pod \"infra-operator-controller-manager-66d6b5f488-5vptt\" (UID: \"230a2167-e078-48a6-93ce-84a37ff4ac02\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.379858 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w4v6\" (UniqueName: \"kubernetes.io/projected/80bc5b9b-00c2-4003-8279-1dbc3ff3aa05-kube-api-access-8w4v6\") pod \"horizon-operator-controller-manager-54fb488b88-9gjbj\" (UID: \"80bc5b9b-00c2-4003-8279-1dbc3ff3aa05\") " pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-9gjbj" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.379925 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsb7d\" (UniqueName: \"kubernetes.io/projected/8d849c9e-0da1-4910-9922-5ea2dd2728a2-kube-api-access-rsb7d\") pod \"glance-operator-controller-manager-68c6d499cb-g7kpv\" (UID: \"8d849c9e-0da1-4910-9922-5ea2dd2728a2\") " pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-g7kpv" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.379946 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7f4p\" (UniqueName: \"kubernetes.io/projected/4cbefad2-6c6d-4b7b-bba9-acf857a54a4b-kube-api-access-c7f4p\") pod 
\"heat-operator-controller-manager-9595d6797-xrzwv\" (UID: \"4cbefad2-6c6d-4b7b-bba9-acf857a54a4b\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-xrzwv" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.380006 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz6ht\" (UniqueName: \"kubernetes.io/projected/230a2167-e078-48a6-93ce-84a37ff4ac02-kube-api-access-zz6ht\") pod \"infra-operator-controller-manager-66d6b5f488-5vptt\" (UID: \"230a2167-e078-48a6-93ce-84a37ff4ac02\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.380030 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n49wh\" (UniqueName: \"kubernetes.io/projected/11715b33-f996-46bf-81db-0557e84e7fea-kube-api-access-n49wh\") pod \"keystone-operator-controller-manager-6c78d668d5-t9dzq\" (UID: \"11715b33-f996-46bf-81db-0557e84e7fea\") " pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.380046 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2bnw\" (UniqueName: \"kubernetes.io/projected/1e73a8a0-3246-4a08-b4be-d587d82742a4-kube-api-access-s2bnw\") pod \"ironic-operator-controller-manager-6494cdbf8f-9qvzl\" (UID: \"1e73a8a0-3246-4a08-b4be-d587d82742a4\") " pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-9qvzl" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.386414 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56k6g" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.387631 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.388813 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.391691 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-gq7ft" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.399383 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-f8nnp"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.400526 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-f8nnp" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.410638 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7f4p\" (UniqueName: \"kubernetes.io/projected/4cbefad2-6c6d-4b7b-bba9-acf857a54a4b-kube-api-access-c7f4p\") pod \"heat-operator-controller-manager-9595d6797-xrzwv\" (UID: \"4cbefad2-6c6d-4b7b-bba9-acf857a54a4b\") " pod="openstack-operators/heat-operator-controller-manager-9595d6797-xrzwv" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.411122 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-pvgms" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.416205 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-26x4h" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.435599 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsb7d\" (UniqueName: \"kubernetes.io/projected/8d849c9e-0da1-4910-9922-5ea2dd2728a2-kube-api-access-rsb7d\") pod \"glance-operator-controller-manager-68c6d499cb-g7kpv\" (UID: \"8d849c9e-0da1-4910-9922-5ea2dd2728a2\") " pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-g7kpv" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.458825 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.479205 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-f8nnp"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.481668 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-g7kpv" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.482956 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz6ht\" (UniqueName: \"kubernetes.io/projected/230a2167-e078-48a6-93ce-84a37ff4ac02-kube-api-access-zz6ht\") pod \"infra-operator-controller-manager-66d6b5f488-5vptt\" (UID: \"230a2167-e078-48a6-93ce-84a37ff4ac02\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.483007 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n49wh\" (UniqueName: \"kubernetes.io/projected/11715b33-f996-46bf-81db-0557e84e7fea-kube-api-access-n49wh\") pod \"keystone-operator-controller-manager-6c78d668d5-t9dzq\" (UID: \"11715b33-f996-46bf-81db-0557e84e7fea\") " pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.483027 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2bnw\" (UniqueName: \"kubernetes.io/projected/1e73a8a0-3246-4a08-b4be-d587d82742a4-kube-api-access-s2bnw\") pod \"ironic-operator-controller-manager-6494cdbf8f-9qvzl\" (UID: \"1e73a8a0-3246-4a08-b4be-d587d82742a4\") " pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-9qvzl" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.483047 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert\") pod \"infra-operator-controller-manager-66d6b5f488-5vptt\" (UID: \"230a2167-e078-48a6-93ce-84a37ff4ac02\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.483069 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8w4v6\" (UniqueName: \"kubernetes.io/projected/80bc5b9b-00c2-4003-8279-1dbc3ff3aa05-kube-api-access-8w4v6\") pod \"horizon-operator-controller-manager-54fb488b88-9gjbj\" (UID: \"80bc5b9b-00c2-4003-8279-1dbc3ff3aa05\") " pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-9gjbj" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.483112 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89l7l\" (UniqueName: \"kubernetes.io/projected/9d43a851-2d6c-4fe9-86e1-04c7d382b257-kube-api-access-89l7l\") pod \"neutron-operator-controller-manager-54967dbbdf-tzn65\" (UID: \"9d43a851-2d6c-4fe9-86e1-04c7d382b257\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.483145 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgjwf\" (UniqueName: \"kubernetes.io/projected/a15b8ac2-0742-4fd7-9a14-005620c93a3d-kube-api-access-mgjwf\") pod \"manila-operator-controller-manager-96fff9cb8-qs9mb\" (UID: \"a15b8ac2-0742-4fd7-9a14-005620c93a3d\") " pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb" Feb 18 19:31:44 crc kubenswrapper[4942]: E0218 19:31:44.483287 4942 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:31:44 crc kubenswrapper[4942]: E0218 19:31:44.483331 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert podName:230a2167-e078-48a6-93ce-84a37ff4ac02 nodeName:}" failed. No retries permitted until 2026-02-18 19:31:44.983313132 +0000 UTC m=+864.688245797 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert") pod "infra-operator-controller-manager-66d6b5f488-5vptt" (UID: "230a2167-e078-48a6-93ce-84a37ff4ac02") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.489477 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.490279 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.501383 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-cmcch" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.513197 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-9595d6797-xrzwv" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.522917 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.525890 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.524397 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2bnw\" (UniqueName: \"kubernetes.io/projected/1e73a8a0-3246-4a08-b4be-d587d82742a4-kube-api-access-s2bnw\") pod \"ironic-operator-controller-manager-6494cdbf8f-9qvzl\" (UID: \"1e73a8a0-3246-4a08-b4be-d587d82742a4\") " pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-9qvzl" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.527790 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz6ht\" (UniqueName: \"kubernetes.io/projected/230a2167-e078-48a6-93ce-84a37ff4ac02-kube-api-access-zz6ht\") pod \"infra-operator-controller-manager-66d6b5f488-5vptt\" (UID: \"230a2167-e078-48a6-93ce-84a37ff4ac02\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.530016 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-t6cn7" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.535047 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n49wh\" (UniqueName: \"kubernetes.io/projected/11715b33-f996-46bf-81db-0557e84e7fea-kube-api-access-n49wh\") pod \"keystone-operator-controller-manager-6c78d668d5-t9dzq\" (UID: \"11715b33-f996-46bf-81db-0557e84e7fea\") " pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.542382 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.548656 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8w4v6\" (UniqueName: \"kubernetes.io/projected/80bc5b9b-00c2-4003-8279-1dbc3ff3aa05-kube-api-access-8w4v6\") pod \"horizon-operator-controller-manager-54fb488b88-9gjbj\" (UID: \"80bc5b9b-00c2-4003-8279-1dbc3ff3aa05\") " pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-9gjbj" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.584609 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hktdt\" (UniqueName: \"kubernetes.io/projected/cde9a09e-2dfe-410e-95ad-8f297b517ef4-kube-api-access-hktdt\") pod \"nova-operator-controller-manager-5ddd85db87-5jzdp\" (UID: \"cde9a09e-2dfe-410e-95ad-8f297b517ef4\") " pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.584661 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89l7l\" (UniqueName: \"kubernetes.io/projected/9d43a851-2d6c-4fe9-86e1-04c7d382b257-kube-api-access-89l7l\") pod \"neutron-operator-controller-manager-54967dbbdf-tzn65\" (UID: \"9d43a851-2d6c-4fe9-86e1-04c7d382b257\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.584684 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgjwf\" (UniqueName: \"kubernetes.io/projected/a15b8ac2-0742-4fd7-9a14-005620c93a3d-kube-api-access-mgjwf\") pod \"manila-operator-controller-manager-96fff9cb8-qs9mb\" (UID: \"a15b8ac2-0742-4fd7-9a14-005620c93a3d\") " pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.584724 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngp72\" (UniqueName: \"kubernetes.io/projected/c2cc0d22-92b6-4c67-9627-79abffb9917c-kube-api-access-ngp72\") 
pod \"mariadb-operator-controller-manager-66997756f6-f8nnp\" (UID: \"c2cc0d22-92b6-4c67-9627-79abffb9917c\") " pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-f8nnp" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.603037 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-9qvzl" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.608524 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgjwf\" (UniqueName: \"kubernetes.io/projected/a15b8ac2-0742-4fd7-9a14-005620c93a3d-kube-api-access-mgjwf\") pod \"manila-operator-controller-manager-96fff9cb8-qs9mb\" (UID: \"a15b8ac2-0742-4fd7-9a14-005620c93a3d\") " pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.618075 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89l7l\" (UniqueName: \"kubernetes.io/projected/9d43a851-2d6c-4fe9-86e1-04c7d382b257-kube-api-access-89l7l\") pod \"neutron-operator-controller-manager-54967dbbdf-tzn65\" (UID: \"9d43a851-2d6c-4fe9-86e1-04c7d382b257\") " pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.618244 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.633230 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.634191 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.638847 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.638872 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-tnb9n" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.642602 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.696881 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-6kt98"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.701565 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hktdt\" (UniqueName: \"kubernetes.io/projected/cde9a09e-2dfe-410e-95ad-8f297b517ef4-kube-api-access-hktdt\") pod \"nova-operator-controller-manager-5ddd85db87-5jzdp\" (UID: \"cde9a09e-2dfe-410e-95ad-8f297b517ef4\") " pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.701665 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngp72\" (UniqueName: \"kubernetes.io/projected/c2cc0d22-92b6-4c67-9627-79abffb9917c-kube-api-access-ngp72\") pod \"mariadb-operator-controller-manager-66997756f6-f8nnp\" (UID: \"c2cc0d22-92b6-4c67-9627-79abffb9917c\") " pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-f8nnp" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.701718 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-28xtk\" (UniqueName: \"kubernetes.io/projected/3b42f10c-a162-4d74-9eed-b6c3ef08cdb7-kube-api-access-28xtk\") pod \"octavia-operator-controller-manager-745bbbd77b-4xhmd\" (UID: \"3b42f10c-a162-4d74-9eed-b6c3ef08cdb7\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.704498 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6kt98" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.716581 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-6vthr" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.725116 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-6kt98"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.738793 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngp72\" (UniqueName: \"kubernetes.io/projected/c2cc0d22-92b6-4c67-9627-79abffb9917c-kube-api-access-ngp72\") pod \"mariadb-operator-controller-manager-66997756f6-f8nnp\" (UID: \"c2cc0d22-92b6-4c67-9627-79abffb9917c\") " pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-f8nnp" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.746182 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.746374 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hktdt\" (UniqueName: \"kubernetes.io/projected/cde9a09e-2dfe-410e-95ad-8f297b517ef4-kube-api-access-hktdt\") pod \"nova-operator-controller-manager-5ddd85db87-5jzdp\" (UID: \"cde9a09e-2dfe-410e-95ad-8f297b517ef4\") " pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.749645 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.750585 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.754553 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.755418 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-h8dd5" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.762350 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.763427 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.774999 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.776180 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-f27rp" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.789653 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-f8nnp" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.791691 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.797638 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.802667 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28xtk\" (UniqueName: \"kubernetes.io/projected/3b42f10c-a162-4d74-9eed-b6c3ef08cdb7-kube-api-access-28xtk\") pod \"octavia-operator-controller-manager-745bbbd77b-4xhmd\" (UID: \"3b42f10c-a162-4d74-9eed-b6c3ef08cdb7\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.802852 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmr8t\" (UniqueName: \"kubernetes.io/projected/716e0e70-0ef0-4843-9ad3-d84f47a3397f-kube-api-access-jmr8t\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk\" (UID: 
\"716e0e70-0ef0-4843-9ad3-d84f47a3397f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.802904 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8w6c\" (UniqueName: \"kubernetes.io/projected/df8c140d-a735-4a14-8239-67f577546e01-kube-api-access-s8w6c\") pod \"ovn-operator-controller-manager-85c99d655-6kt98\" (UID: \"df8c140d-a735-4a14-8239-67f577546e01\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6kt98" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.802924 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk\" (UID: \"716e0e70-0ef0-4843-9ad3-d84f47a3397f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.818443 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.821207 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.830513 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-df9tk" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.838081 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-9gjbj" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.848706 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.849283 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.855585 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.856694 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.866497 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-v92qj" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.887234 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28xtk\" (UniqueName: \"kubernetes.io/projected/3b42f10c-a162-4d74-9eed-b6c3ef08cdb7-kube-api-access-28xtk\") pod \"octavia-operator-controller-manager-745bbbd77b-4xhmd\" (UID: \"3b42f10c-a162-4d74-9eed-b6c3ef08cdb7\") " pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.889922 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.907515 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmr8t\" (UniqueName: \"kubernetes.io/projected/716e0e70-0ef0-4843-9ad3-d84f47a3397f-kube-api-access-jmr8t\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk\" (UID: \"716e0e70-0ef0-4843-9ad3-d84f47a3397f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.907562 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8w6c\" (UniqueName: \"kubernetes.io/projected/df8c140d-a735-4a14-8239-67f577546e01-kube-api-access-s8w6c\") pod \"ovn-operator-controller-manager-85c99d655-6kt98\" (UID: \"df8c140d-a735-4a14-8239-67f577546e01\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6kt98" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.907586 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk\" (UID: \"716e0e70-0ef0-4843-9ad3-d84f47a3397f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.907664 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kxjt\" (UniqueName: \"kubernetes.io/projected/8ca2018a-1b2e-4fa2-8564-3e2a0d3d8377-kube-api-access-8kxjt\") pod \"placement-operator-controller-manager-57bd55f9b7-cg225\" (UID: \"8ca2018a-1b2e-4fa2-8564-3e2a0d3d8377\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225" Feb 18 19:31:44 crc 
kubenswrapper[4942]: I0218 19:31:44.907687 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppsnj\" (UniqueName: \"kubernetes.io/projected/6618726f-c93c-4d05-b6d9-a08aca84801f-kube-api-access-ppsnj\") pod \"swift-operator-controller-manager-79558bbfbf-r8hvr\" (UID: \"6618726f-c93c-4d05-b6d9-a08aca84801f\") " pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr" Feb 18 19:31:44 crc kubenswrapper[4942]: E0218 19:31:44.908011 4942 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:31:44 crc kubenswrapper[4942]: E0218 19:31:44.908084 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert podName:716e0e70-0ef0-4843-9ad3-d84f47a3397f nodeName:}" failed. No retries permitted until 2026-02-18 19:31:45.408062218 +0000 UTC m=+865.112994983 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" (UID: "716e0e70-0ef0-4843-9ad3-d84f47a3397f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.912808 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.926687 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.927902 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.931193 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.932392 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-52wtk" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.935990 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8w6c\" (UniqueName: \"kubernetes.io/projected/df8c140d-a735-4a14-8239-67f577546e01-kube-api-access-s8w6c\") pod \"ovn-operator-controller-manager-85c99d655-6kt98\" (UID: \"df8c140d-a735-4a14-8239-67f577546e01\") " pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6kt98" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.943818 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-57746b5ff9-56k6g"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.948830 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmr8t\" (UniqueName: \"kubernetes.io/projected/716e0e70-0ef0-4843-9ad3-d84f47a3397f-kube-api-access-jmr8t\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk\" (UID: \"716e0e70-0ef0-4843-9ad3-d84f47a3397f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.973344 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9"] Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.974273 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.976510 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.976523 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-fshwm" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.976336 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 18 19:31:44 crc kubenswrapper[4942]: I0218 19:31:44.985495 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:44.999616 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.000841 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.005497 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-c9tgl" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.011910 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert\") pod \"infra-operator-controller-manager-66d6b5f488-5vptt\" (UID: \"230a2167-e078-48a6-93ce-84a37ff4ac02\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.011956 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6wkc\" (UniqueName: \"kubernetes.io/projected/a65b16e4-f55f-427a-a629-2fbff014a7af-kube-api-access-m6wkc\") pod \"telemetry-operator-controller-manager-56dc67d744-hhjwz\" (UID: \"a65b16e4-f55f-427a-a629-2fbff014a7af\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.011978 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kxjt\" (UniqueName: \"kubernetes.io/projected/8ca2018a-1b2e-4fa2-8564-3e2a0d3d8377-kube-api-access-8kxjt\") pod \"placement-operator-controller-manager-57bd55f9b7-cg225\" (UID: \"8ca2018a-1b2e-4fa2-8564-3e2a0d3d8377\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.011998 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppsnj\" (UniqueName: \"kubernetes.io/projected/6618726f-c93c-4d05-b6d9-a08aca84801f-kube-api-access-ppsnj\") pod 
\"swift-operator-controller-manager-79558bbfbf-r8hvr\" (UID: \"6618726f-c93c-4d05-b6d9-a08aca84801f\") " pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.012025 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwh79\" (UniqueName: \"kubernetes.io/projected/2fda65c9-97fe-4689-bd35-7f7974841223-kube-api-access-lwh79\") pod \"test-operator-controller-manager-8467ccb4c8-shr4v\" (UID: \"2fda65c9-97fe-4689-bd35-7f7974841223\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v" Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.012150 4942 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.012186 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert podName:230a2167-e078-48a6-93ce-84a37ff4ac02 nodeName:}" failed. No retries permitted until 2026-02-18 19:31:46.012173059 +0000 UTC m=+865.717105714 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert") pod "infra-operator-controller-manager-66d6b5f488-5vptt" (UID: "230a2167-e078-48a6-93ce-84a37ff4ac02") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.012286 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-c4b7d6946-rvgp6"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.029645 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kxjt\" (UniqueName: \"kubernetes.io/projected/8ca2018a-1b2e-4fa2-8564-3e2a0d3d8377-kube-api-access-8kxjt\") pod \"placement-operator-controller-manager-57bd55f9b7-cg225\" (UID: \"8ca2018a-1b2e-4fa2-8564-3e2a0d3d8377\") " pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.030109 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppsnj\" (UniqueName: \"kubernetes.io/projected/6618726f-c93c-4d05-b6d9-a08aca84801f-kube-api-access-ppsnj\") pod \"swift-operator-controller-manager-79558bbfbf-r8hvr\" (UID: \"6618726f-c93c-4d05-b6d9-a08aca84801f\") " pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.033891 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.055005 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6kt98" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.079864 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68c6d499cb-g7kpv"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.083828 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6494cdbf8f-9qvzl"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.112362 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.114750 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.114820 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkx4q\" (UniqueName: \"kubernetes.io/projected/250062ed-a35d-489a-a6b5-e6f96d1532d6-kube-api-access-xkx4q\") pod \"watcher-operator-controller-manager-c8b4db7df-h9q84\" (UID: \"250062ed-a35d-489a-a6b5-e6f96d1532d6\") " pod="openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.114889 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfw9c\" (UniqueName: \"kubernetes.io/projected/0f7a5f35-f6e0-4f17-a380-13e8718ba658-kube-api-access-pfw9c\") pod 
\"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.114953 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v687s\" (UniqueName: \"kubernetes.io/projected/5fe849cd-ac9e-48bb-a7dd-f7f529a324e3-kube-api-access-v687s\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wvj72\" (UID: \"5fe849cd-ac9e-48bb-a7dd-f7f529a324e3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.114983 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6wkc\" (UniqueName: \"kubernetes.io/projected/a65b16e4-f55f-427a-a629-2fbff014a7af-kube-api-access-m6wkc\") pod \"telemetry-operator-controller-manager-56dc67d744-hhjwz\" (UID: \"a65b16e4-f55f-427a-a629-2fbff014a7af\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.115039 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwh79\" (UniqueName: \"kubernetes.io/projected/2fda65c9-97fe-4689-bd35-7f7974841223-kube-api-access-lwh79\") pod \"test-operator-controller-manager-8467ccb4c8-shr4v\" (UID: \"2fda65c9-97fe-4689-bd35-7f7974841223\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.115061 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " 
pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.120079 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.135250 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6wkc\" (UniqueName: \"kubernetes.io/projected/a65b16e4-f55f-427a-a629-2fbff014a7af-kube-api-access-m6wkc\") pod \"telemetry-operator-controller-manager-56dc67d744-hhjwz\" (UID: \"a65b16e4-f55f-427a-a629-2fbff014a7af\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.140504 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwh79\" (UniqueName: \"kubernetes.io/projected/2fda65c9-97fe-4689-bd35-7f7974841223-kube-api-access-lwh79\") pod \"test-operator-controller-manager-8467ccb4c8-shr4v\" (UID: \"2fda65c9-97fe-4689-bd35-7f7974841223\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.143941 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-26x4h"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.197474 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz" Feb 18 19:31:45 crc kubenswrapper[4942]: W0218 19:31:45.203467 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod844a0cad_5a6a_4ab4_8e32_388835eb9f4a.slice/crio-491ea45a82cb77025866c40f2868170dcf34cae892f581557b64cce1088a204e WatchSource:0}: Error finding container 491ea45a82cb77025866c40f2868170dcf34cae892f581557b64cce1088a204e: Status 404 returned error can't find the container with id 491ea45a82cb77025866c40f2868170dcf34cae892f581557b64cce1088a204e Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.216225 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.216622 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v687s\" (UniqueName: \"kubernetes.io/projected/5fe849cd-ac9e-48bb-a7dd-f7f529a324e3-kube-api-access-v687s\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wvj72\" (UID: \"5fe849cd-ac9e-48bb-a7dd-f7f529a324e3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.216680 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.216706 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.216729 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkx4q\" (UniqueName: \"kubernetes.io/projected/250062ed-a35d-489a-a6b5-e6f96d1532d6-kube-api-access-xkx4q\") pod \"watcher-operator-controller-manager-c8b4db7df-h9q84\" (UID: \"250062ed-a35d-489a-a6b5-e6f96d1532d6\") " pod="openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.216784 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfw9c\" (UniqueName: \"kubernetes.io/projected/0f7a5f35-f6e0-4f17-a380-13e8718ba658-kube-api-access-pfw9c\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.217199 4942 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.217241 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs podName:0f7a5f35-f6e0-4f17-a380-13e8718ba658 nodeName:}" failed. No retries permitted until 2026-02-18 19:31:45.717225753 +0000 UTC m=+865.422158418 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs") pod "openstack-operator-controller-manager-57f845558-vcfm9" (UID: "0f7a5f35-f6e0-4f17-a380-13e8718ba658") : secret "metrics-server-cert" not found Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.217341 4942 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.217382 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs podName:0f7a5f35-f6e0-4f17-a380-13e8718ba658 nodeName:}" failed. No retries permitted until 2026-02-18 19:31:45.717367597 +0000 UTC m=+865.422300262 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs") pod "openstack-operator-controller-manager-57f845558-vcfm9" (UID: "0f7a5f35-f6e0-4f17-a380-13e8718ba658") : secret "webhook-server-cert" not found Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.233706 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfw9c\" (UniqueName: \"kubernetes.io/projected/0f7a5f35-f6e0-4f17-a380-13e8718ba658-kube-api-access-pfw9c\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.235646 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkx4q\" (UniqueName: \"kubernetes.io/projected/250062ed-a35d-489a-a6b5-e6f96d1532d6-kube-api-access-xkx4q\") pod \"watcher-operator-controller-manager-c8b4db7df-h9q84\" (UID: \"250062ed-a35d-489a-a6b5-e6f96d1532d6\") " 
pod="openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.235865 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v687s\" (UniqueName: \"kubernetes.io/projected/5fe849cd-ac9e-48bb-a7dd-f7f529a324e3-kube-api-access-v687s\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wvj72\" (UID: \"5fe849cd-ac9e-48bb-a7dd-f7f529a324e3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.250851 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.313527 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-9595d6797-xrzwv"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.383040 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.419295 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk\" (UID: \"716e0e70-0ef0-4843-9ad3-d84f47a3397f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.419373 4942 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.419432 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert podName:716e0e70-0ef0-4843-9ad3-d84f47a3397f nodeName:}" failed. No retries permitted until 2026-02-18 19:31:46.419416345 +0000 UTC m=+866.124349080 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" (UID: "716e0e70-0ef0-4843-9ad3-d84f47a3397f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.432609 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-9595d6797-xrzwv" event={"ID":"4cbefad2-6c6d-4b7b-bba9-acf857a54a4b","Type":"ContainerStarted","Data":"bf3ca6b57890b06540c918f1f74d9fdd6429bf4d4eb1333bb3f3cc33715ae771"} Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.443537 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-26x4h" event={"ID":"844a0cad-5a6a-4ab4-8e32-388835eb9f4a","Type":"ContainerStarted","Data":"491ea45a82cb77025866c40f2868170dcf34cae892f581557b64cce1088a204e"} Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.446100 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-g7kpv" event={"ID":"8d849c9e-0da1-4910-9922-5ea2dd2728a2","Type":"ContainerStarted","Data":"e63ee5cbb8fcb10c5613d21b8cf6969d33e6f10cb4401c3e53c0b50f03f9e36b"} Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.447038 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-rvgp6" event={"ID":"829c57a8-54c3-43c5-8bea-2ceeeafeb143","Type":"ContainerStarted","Data":"45d1768bbbe9899b96b394eebe9b15dcd5ebd49b77d371d6a68706ae71b29b43"} Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.448127 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56k6g" 
event={"ID":"51f45ea1-2b95-4553-9e3d-5e6bb4c8b862","Type":"ContainerStarted","Data":"209f319da30a65f3066f8ac773346a71c1c2f2f39926654cd7691583ffb3e8b5"} Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.449488 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-9qvzl" event={"ID":"1e73a8a0-3246-4a08-b4be-d587d82742a4","Type":"ContainerStarted","Data":"dd27502d63022ab3e4e0f450c4d42036958bc1c8d44bf2a67fc397b583d73b8d"} Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.449706 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.509283 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.515682 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66997756f6-f8nnp"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.685862 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-54fb488b88-9gjbj"] Feb 18 19:31:45 crc kubenswrapper[4942]: W0218 19:31:45.688673 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d43a851_2d6c_4fe9_86e1_04c7d382b257.slice/crio-ba3e9924d1af9a2e2cdbe04a4e584f303b04aa5530466114eb7031ec34dab3f3 WatchSource:0}: Error finding container ba3e9924d1af9a2e2cdbe04a4e584f303b04aa5530466114eb7031ec34dab3f3: Status 404 returned error can't find the container with id ba3e9924d1af9a2e2cdbe04a4e584f303b04aa5530466114eb7031ec34dab3f3 Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.692697 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.700709 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65"] Feb 18 19:31:45 crc kubenswrapper[4942]: W0218 19:31:45.700736 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcde9a09e_2dfe_410e_95ad_8f297b517ef4.slice/crio-09fdc5a3f7408d0b9f325701dd398de889bf4e14ba86cfe3b8aa640d252a8589 WatchSource:0}: Error finding container 09fdc5a3f7408d0b9f325701dd398de889bf4e14ba86cfe3b8aa640d252a8589: Status 404 returned error can't find the container with id 09fdc5a3f7408d0b9f325701dd398de889bf4e14ba86cfe3b8aa640d252a8589 Feb 18 19:31:45 crc kubenswrapper[4942]: W0218 19:31:45.702415 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b42f10c_a162_4d74_9eed_b6c3ef08cdb7.slice/crio-4984f32847d5cbdc32f5da53ccaef59634d1a913dade134311d80a7ca8917838 WatchSource:0}: Error finding container 4984f32847d5cbdc32f5da53ccaef59634d1a913dade134311d80a7ca8917838: Status 404 returned error can't find the container with id 4984f32847d5cbdc32f5da53ccaef59634d1a913dade134311d80a7ca8917838 Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.706170 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.711928 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-85c99d655-6kt98"] Feb 18 19:31:45 crc kubenswrapper[4942]: W0218 19:31:45.718278 4942 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf8c140d_a735_4a14_8239_67f577546e01.slice/crio-be7a5234ea1ca872ac886838a591ce75b8185eb313f72b20a206db6baab7ffe9 WatchSource:0}: Error finding container be7a5234ea1ca872ac886838a591ce75b8185eb313f72b20a206db6baab7ffe9: Status 404 returned error can't find the container with id be7a5234ea1ca872ac886838a591ce75b8185eb313f72b20a206db6baab7ffe9 Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.726016 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.726167 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.726181 4942 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.726239 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs podName:0f7a5f35-f6e0-4f17-a380-13e8718ba658 nodeName:}" failed. No retries permitted until 2026-02-18 19:31:46.72622112 +0000 UTC m=+866.431153785 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs") pod "openstack-operator-controller-manager-57f845558-vcfm9" (UID: "0f7a5f35-f6e0-4f17-a380-13e8718ba658") : secret "metrics-server-cert" not found Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.726287 4942 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.726328 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs podName:0f7a5f35-f6e0-4f17-a380-13e8718ba658 nodeName:}" failed. No retries permitted until 2026-02-18 19:31:46.726313253 +0000 UTC m=+866.431245918 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs") pod "openstack-operator-controller-manager-57f845558-vcfm9" (UID: "0f7a5f35-f6e0-4f17-a380-13e8718ba658") : secret "webhook-server-cert" not found Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.889136 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz"] Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.901427 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr"] Feb 18 19:31:45 crc kubenswrapper[4942]: W0218 19:31:45.906895 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda65b16e4_f55f_427a_a629_2fbff014a7af.slice/crio-660a930740e1418c2dc360720ea1bf3998a8820c3fcb64645cac6ec6ee627413 WatchSource:0}: Error finding container 660a930740e1418c2dc360720ea1bf3998a8820c3fcb64645cac6ec6ee627413: Status 404 returned error can't find the 
container with id 660a930740e1418c2dc360720ea1bf3998a8820c3fcb64645cac6ec6ee627413 Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.910281 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225"] Feb 18 19:31:45 crc kubenswrapper[4942]: W0218 19:31:45.915789 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6618726f_c93c_4d05_b6d9_a08aca84801f.slice/crio-df22e4522fc4389f2ac239a4c109904e7123c68fb8dd4fab45d9bbe2031a749e WatchSource:0}: Error finding container df22e4522fc4389f2ac239a4c109904e7123c68fb8dd4fab45d9bbe2031a749e: Status 404 returned error can't find the container with id df22e4522fc4389f2ac239a4c109904e7123c68fb8dd4fab45d9bbe2031a749e Feb 18 19:31:45 crc kubenswrapper[4942]: I0218 19:31:45.917239 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v"] Feb 18 19:31:45 crc kubenswrapper[4942]: W0218 19:31:45.919119 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ca2018a_1b2e_4fa2_8564_3e2a0d3d8377.slice/crio-638566932c2db92780adfcd405cd6d4ed321369db99a3902aad20533c069cb19 WatchSource:0}: Error finding container 638566932c2db92780adfcd405cd6d4ed321369db99a3902aad20533c069cb19: Status 404 returned error can't find the container with id 638566932c2db92780adfcd405cd6d4ed321369db99a3902aad20533c069cb19 Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.921202 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8kxjt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-57bd55f9b7-cg225_openstack-operators(8ca2018a-1b2e-4fa2-8564-3e2a0d3d8377): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.920376 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:015f7f2d8b5afc85e51dd3b2e02a4cfb8294b543437315b291006d2416764db9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ppsnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-79558bbfbf-r8hvr_openstack-operators(6618726f-c93c-4d05-b6d9-a08aca84801f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.922441 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225" podUID="8ca2018a-1b2e-4fa2-8564-3e2a0d3d8377" Feb 18 19:31:45 crc 
kubenswrapper[4942]: E0218 19:31:45.923291 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr" podUID="6618726f-c93c-4d05-b6d9-a08aca84801f" Feb 18 19:31:45 crc kubenswrapper[4942]: W0218 19:31:45.924346 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fda65c9_97fe_4689_bd35_7f7974841223.slice/crio-03424b38b3c0f673d58b17261e271e3dc671cb717c527775f272b6b7f16ee4e9 WatchSource:0}: Error finding container 03424b38b3c0f673d58b17261e271e3dc671cb717c527775f272b6b7f16ee4e9: Status 404 returned error can't find the container with id 03424b38b3c0f673d58b17261e271e3dc671cb717c527775f272b6b7f16ee4e9 Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.926753 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lwh79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-8467ccb4c8-shr4v_openstack-operators(2fda65c9-97fe-4689-bd35-7f7974841223): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:31:45 crc kubenswrapper[4942]: E0218 19:31:45.928196 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v" podUID="2fda65c9-97fe-4689-bd35-7f7974841223" Feb 18 19:31:46 crc 
kubenswrapper[4942]: I0218 19:31:46.024818 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84"] Feb 18 19:31:46 crc kubenswrapper[4942]: W0218 19:31:46.025697 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod250062ed_a35d_489a_a6b5_e6f96d1532d6.slice/crio-9a53e41ecd9bc15ba96074727586423e878c0a93bc1ce3ac75ba8f7ba5e61636 WatchSource:0}: Error finding container 9a53e41ecd9bc15ba96074727586423e878c0a93bc1ce3ac75ba8f7ba5e61636: Status 404 returned error can't find the container with id 9a53e41ecd9bc15ba96074727586423e878c0a93bc1ce3ac75ba8f7ba5e61636 Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.029737 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72"] Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.031033 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert\") pod \"infra-operator-controller-manager-66d6b5f488-5vptt\" (UID: \"230a2167-e078-48a6-93ce-84a37ff4ac02\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.031311 4942 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.031365 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert podName:230a2167-e078-48a6-93ce-84a37ff4ac02 nodeName:}" failed. No retries permitted until 2026-02-18 19:31:48.031348155 +0000 UTC m=+867.736280820 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert") pod "infra-operator-controller-manager-66d6b5f488-5vptt" (UID: "230a2167-e078-48a6-93ce-84a37ff4ac02") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.031389 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.12:5001/openstack-k8s-operators/watcher-operator:bccc5f477aecf1b112841224406211ceeff240ba,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xkx4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-c8b4db7df-h9q84_openstack-operators(250062ed-a35d-489a-a6b5-e6f96d1532d6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:31:46 crc kubenswrapper[4942]: W0218 19:31:46.031707 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe849cd_ac9e_48bb_a7dd_f7f529a324e3.slice/crio-513691ae5bc9e0537d5f7df8632eefb88aab16e7968a1a7710409c8cb9269a3f WatchSource:0}: Error finding container 513691ae5bc9e0537d5f7df8632eefb88aab16e7968a1a7710409c8cb9269a3f: Status 404 returned error can't find the container with id 513691ae5bc9e0537d5f7df8632eefb88aab16e7968a1a7710409c8cb9269a3f Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.032861 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84" podUID="250062ed-a35d-489a-a6b5-e6f96d1532d6" Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.037103 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v687s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-wvj72_openstack-operators(5fe849cd-ac9e-48bb-a7dd-f7f529a324e3): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.038271 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72" podUID="5fe849cd-ac9e-48bb-a7dd-f7f529a324e3" Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.187657 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.187706 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.242720 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.438789 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk\" (UID: \"716e0e70-0ef0-4843-9ad3-d84f47a3397f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.438971 4942 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.439048 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert podName:716e0e70-0ef0-4843-9ad3-d84f47a3397f nodeName:}" failed. 
No retries permitted until 2026-02-18 19:31:48.439030072 +0000 UTC m=+868.143962737 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" (UID: "716e0e70-0ef0-4843-9ad3-d84f47a3397f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.460665 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" event={"ID":"3b42f10c-a162-4d74-9eed-b6c3ef08cdb7","Type":"ContainerStarted","Data":"4984f32847d5cbdc32f5da53ccaef59634d1a913dade134311d80a7ca8917838"} Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.462633 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb" event={"ID":"a15b8ac2-0742-4fd7-9a14-005620c93a3d","Type":"ContainerStarted","Data":"1657474b3474d4bfd82ff0e36e34cba7d2fb16e42e519c3466e6987d12ee549f"} Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.464633 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz" event={"ID":"a65b16e4-f55f-427a-a629-2fbff014a7af","Type":"ContainerStarted","Data":"660a930740e1418c2dc360720ea1bf3998a8820c3fcb64645cac6ec6ee627413"} Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.465945 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr" event={"ID":"6618726f-c93c-4d05-b6d9-a08aca84801f","Type":"ContainerStarted","Data":"df22e4522fc4389f2ac239a4c109904e7123c68fb8dd4fab45d9bbe2031a749e"} Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.466987 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225" event={"ID":"8ca2018a-1b2e-4fa2-8564-3e2a0d3d8377","Type":"ContainerStarted","Data":"638566932c2db92780adfcd405cd6d4ed321369db99a3902aad20533c069cb19"} Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.468641 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v" event={"ID":"2fda65c9-97fe-4689-bd35-7f7974841223","Type":"ContainerStarted","Data":"03424b38b3c0f673d58b17261e271e3dc671cb717c527775f272b6b7f16ee4e9"} Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.469193 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89\\\"\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225" podUID="8ca2018a-1b2e-4fa2-8564-3e2a0d3d8377" Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.469944 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27\\\"\"" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v" podUID="2fda65c9-97fe-4689-bd35-7f7974841223" Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.470019 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-f8nnp" event={"ID":"c2cc0d22-92b6-4c67-9627-79abffb9917c","Type":"ContainerStarted","Data":"de8058e26faf64d71c41c756de3bf8fc81b29468e63fa447937d48b21eada35e"} Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.471750 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-9gjbj" event={"ID":"80bc5b9b-00c2-4003-8279-1dbc3ff3aa05","Type":"ContainerStarted","Data":"fbd2bd82e4c0a6dbd816d36a53c5abdf89588dbd915ee9fed90d1d6f3f25640e"} Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.474437 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72" event={"ID":"5fe849cd-ac9e-48bb-a7dd-f7f529a324e3","Type":"ContainerStarted","Data":"513691ae5bc9e0537d5f7df8632eefb88aab16e7968a1a7710409c8cb9269a3f"} Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.478153 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72" podUID="5fe849cd-ac9e-48bb-a7dd-f7f529a324e3" Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.480617 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:015f7f2d8b5afc85e51dd3b2e02a4cfb8294b543437315b291006d2416764db9\\\"\"" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr" podUID="6618726f-c93c-4d05-b6d9-a08aca84801f" Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.482015 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84" event={"ID":"250062ed-a35d-489a-a6b5-e6f96d1532d6","Type":"ContainerStarted","Data":"9a53e41ecd9bc15ba96074727586423e878c0a93bc1ce3ac75ba8f7ba5e61636"} Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.493748 4942 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.12:5001/openstack-k8s-operators/watcher-operator:bccc5f477aecf1b112841224406211ceeff240ba\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84" podUID="250062ed-a35d-489a-a6b5-e6f96d1532d6" Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.501479 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp" event={"ID":"cde9a09e-2dfe-410e-95ad-8f297b517ef4","Type":"ContainerStarted","Data":"09fdc5a3f7408d0b9f325701dd398de889bf4e14ba86cfe3b8aa640d252a8589"} Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.502781 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6kt98" event={"ID":"df8c140d-a735-4a14-8239-67f577546e01","Type":"ContainerStarted","Data":"be7a5234ea1ca872ac886838a591ce75b8185eb313f72b20a206db6baab7ffe9"} Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.503922 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65" event={"ID":"9d43a851-2d6c-4fe9-86e1-04c7d382b257","Type":"ContainerStarted","Data":"ba3e9924d1af9a2e2cdbe04a4e584f303b04aa5530466114eb7031ec34dab3f3"} Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.505458 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq" event={"ID":"11715b33-f996-46bf-81db-0557e84e7fea","Type":"ContainerStarted","Data":"a612a48db18acec5544ca2aac968fd95a3b6e878e7a0f9cc01fe74af053edc8a"} Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.562498 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.609588 4942 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-928mg"] Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.745225 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.745385 4942 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.745525 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs podName:0f7a5f35-f6e0-4f17-a380-13e8718ba658 nodeName:}" failed. No retries permitted until 2026-02-18 19:31:48.745502 +0000 UTC m=+868.450434665 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs") pod "openstack-operator-controller-manager-57f845558-vcfm9" (UID: "0f7a5f35-f6e0-4f17-a380-13e8718ba658") : secret "webhook-server-cert" not found Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.745605 4942 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:31:46 crc kubenswrapper[4942]: E0218 19:31:46.745650 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs podName:0f7a5f35-f6e0-4f17-a380-13e8718ba658 nodeName:}" failed. No retries permitted until 2026-02-18 19:31:48.745637243 +0000 UTC m=+868.450569918 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs") pod "openstack-operator-controller-manager-57f845558-vcfm9" (UID: "0f7a5f35-f6e0-4f17-a380-13e8718ba658") : secret "metrics-server-cert" not found Feb 18 19:31:46 crc kubenswrapper[4942]: I0218 19:31:46.745448 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:47 crc kubenswrapper[4942]: E0218 19:31:47.517113 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27\\\"\"" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v" podUID="2fda65c9-97fe-4689-bd35-7f7974841223" Feb 18 19:31:47 crc kubenswrapper[4942]: E0218 19:31:47.517535 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d800f1288d1517d84a45ddd475c3c0b4e8686fd900c9edf1e20b662b15218b89\\\"\"" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225" podUID="8ca2018a-1b2e-4fa2-8564-3e2a0d3d8377" Feb 18 19:31:47 crc kubenswrapper[4942]: E0218 19:31:47.517674 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:015f7f2d8b5afc85e51dd3b2e02a4cfb8294b543437315b291006d2416764db9\\\"\"" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr" podUID="6618726f-c93c-4d05-b6d9-a08aca84801f" Feb 18 19:31:47 crc kubenswrapper[4942]: E0218 19:31:47.517679 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.12:5001/openstack-k8s-operators/watcher-operator:bccc5f477aecf1b112841224406211ceeff240ba\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84" podUID="250062ed-a35d-489a-a6b5-e6f96d1532d6" Feb 18 19:31:47 crc kubenswrapper[4942]: E0218 19:31:47.518605 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72" podUID="5fe849cd-ac9e-48bb-a7dd-f7f529a324e3" Feb 18 19:31:48 crc kubenswrapper[4942]: I0218 19:31:48.082622 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert\") pod \"infra-operator-controller-manager-66d6b5f488-5vptt\" (UID: \"230a2167-e078-48a6-93ce-84a37ff4ac02\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:31:48 crc kubenswrapper[4942]: E0218 19:31:48.082695 4942 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:31:48 crc kubenswrapper[4942]: E0218 19:31:48.082823 4942 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert podName:230a2167-e078-48a6-93ce-84a37ff4ac02 nodeName:}" failed. No retries permitted until 2026-02-18 19:31:52.082806637 +0000 UTC m=+871.787739302 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert") pod "infra-operator-controller-manager-66d6b5f488-5vptt" (UID: "230a2167-e078-48a6-93ce-84a37ff4ac02") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:31:48 crc kubenswrapper[4942]: I0218 19:31:48.488877 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk\" (UID: \"716e0e70-0ef0-4843-9ad3-d84f47a3397f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:31:48 crc kubenswrapper[4942]: E0218 19:31:48.489064 4942 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:31:48 crc kubenswrapper[4942]: E0218 19:31:48.489113 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert podName:716e0e70-0ef0-4843-9ad3-d84f47a3397f nodeName:}" failed. No retries permitted until 2026-02-18 19:31:52.489098729 +0000 UTC m=+872.194031384 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" (UID: "716e0e70-0ef0-4843-9ad3-d84f47a3397f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:31:48 crc kubenswrapper[4942]: I0218 19:31:48.522991 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-928mg" podUID="83b97eec-f1b8-4205-933f-205e30caeec2" containerName="registry-server" containerID="cri-o://801e3e316ccbbae8f59abaf78bbc4c858ca81ddbf88f422f0ae0653aa768a2de" gracePeriod=2 Feb 18 19:31:48 crc kubenswrapper[4942]: I0218 19:31:48.792945 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:48 crc kubenswrapper[4942]: I0218 19:31:48.793071 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:48 crc kubenswrapper[4942]: E0218 19:31:48.793242 4942 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:31:48 crc kubenswrapper[4942]: E0218 19:31:48.793281 4942 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:31:48 crc kubenswrapper[4942]: E0218 
19:31:48.793373 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs podName:0f7a5f35-f6e0-4f17-a380-13e8718ba658 nodeName:}" failed. No retries permitted until 2026-02-18 19:31:52.793342271 +0000 UTC m=+872.498274976 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs") pod "openstack-operator-controller-manager-57f845558-vcfm9" (UID: "0f7a5f35-f6e0-4f17-a380-13e8718ba658") : secret "metrics-server-cert" not found Feb 18 19:31:48 crc kubenswrapper[4942]: E0218 19:31:48.793410 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs podName:0f7a5f35-f6e0-4f17-a380-13e8718ba658 nodeName:}" failed. No retries permitted until 2026-02-18 19:31:52.793391402 +0000 UTC m=+872.498324107 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs") pod "openstack-operator-controller-manager-57f845558-vcfm9" (UID: "0f7a5f35-f6e0-4f17-a380-13e8718ba658") : secret "webhook-server-cert" not found Feb 18 19:31:49 crc kubenswrapper[4942]: I0218 19:31:49.532654 4942 generic.go:334] "Generic (PLEG): container finished" podID="83b97eec-f1b8-4205-933f-205e30caeec2" containerID="801e3e316ccbbae8f59abaf78bbc4c858ca81ddbf88f422f0ae0653aa768a2de" exitCode=0 Feb 18 19:31:49 crc kubenswrapper[4942]: I0218 19:31:49.532692 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-928mg" event={"ID":"83b97eec-f1b8-4205-933f-205e30caeec2","Type":"ContainerDied","Data":"801e3e316ccbbae8f59abaf78bbc4c858ca81ddbf88f422f0ae0653aa768a2de"} Feb 18 19:31:52 crc kubenswrapper[4942]: I0218 19:31:52.146860 4942 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"cert\" (UniqueName: \"kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert\") pod \"infra-operator-controller-manager-66d6b5f488-5vptt\" (UID: \"230a2167-e078-48a6-93ce-84a37ff4ac02\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:31:52 crc kubenswrapper[4942]: E0218 19:31:52.147066 4942 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:31:52 crc kubenswrapper[4942]: E0218 19:31:52.147821 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert podName:230a2167-e078-48a6-93ce-84a37ff4ac02 nodeName:}" failed. No retries permitted until 2026-02-18 19:32:00.147791738 +0000 UTC m=+879.852724443 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert") pod "infra-operator-controller-manager-66d6b5f488-5vptt" (UID: "230a2167-e078-48a6-93ce-84a37ff4ac02") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:31:52 crc kubenswrapper[4942]: I0218 19:31:52.553953 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk\" (UID: \"716e0e70-0ef0-4843-9ad3-d84f47a3397f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:31:52 crc kubenswrapper[4942]: E0218 19:31:52.554092 4942 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:31:52 crc kubenswrapper[4942]: E0218 19:31:52.554171 4942 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert podName:716e0e70-0ef0-4843-9ad3-d84f47a3397f nodeName:}" failed. No retries permitted until 2026-02-18 19:32:00.554150552 +0000 UTC m=+880.259083217 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" (UID: "716e0e70-0ef0-4843-9ad3-d84f47a3397f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 19:31:52 crc kubenswrapper[4942]: I0218 19:31:52.857848 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:52 crc kubenswrapper[4942]: I0218 19:31:52.857925 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:31:52 crc kubenswrapper[4942]: E0218 19:31:52.858141 4942 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 19:31:52 crc kubenswrapper[4942]: E0218 19:31:52.858201 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs podName:0f7a5f35-f6e0-4f17-a380-13e8718ba658 nodeName:}" failed. 
No retries permitted until 2026-02-18 19:32:00.858182617 +0000 UTC m=+880.563115302 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs") pod "openstack-operator-controller-manager-57f845558-vcfm9" (UID: "0f7a5f35-f6e0-4f17-a380-13e8718ba658") : secret "metrics-server-cert" not found Feb 18 19:31:52 crc kubenswrapper[4942]: E0218 19:31:52.858655 4942 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 19:31:52 crc kubenswrapper[4942]: E0218 19:31:52.858696 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs podName:0f7a5f35-f6e0-4f17-a380-13e8718ba658 nodeName:}" failed. No retries permitted until 2026-02-18 19:32:00.85868286 +0000 UTC m=+880.563615535 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs") pod "openstack-operator-controller-manager-57f845558-vcfm9" (UID: "0f7a5f35-f6e0-4f17-a380-13e8718ba658") : secret "webhook-server-cert" not found Feb 18 19:31:56 crc kubenswrapper[4942]: E0218 19:31:56.186065 4942 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 801e3e316ccbbae8f59abaf78bbc4c858ca81ddbf88f422f0ae0653aa768a2de is running failed: container process not found" containerID="801e3e316ccbbae8f59abaf78bbc4c858ca81ddbf88f422f0ae0653aa768a2de" cmd=["grpc_health_probe","-addr=:50051"] Feb 18 19:31:56 crc kubenswrapper[4942]: E0218 19:31:56.187073 4942 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 801e3e316ccbbae8f59abaf78bbc4c858ca81ddbf88f422f0ae0653aa768a2de is running 
failed: container process not found" containerID="801e3e316ccbbae8f59abaf78bbc4c858ca81ddbf88f422f0ae0653aa768a2de" cmd=["grpc_health_probe","-addr=:50051"] Feb 18 19:31:56 crc kubenswrapper[4942]: E0218 19:31:56.187531 4942 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 801e3e316ccbbae8f59abaf78bbc4c858ca81ddbf88f422f0ae0653aa768a2de is running failed: container process not found" containerID="801e3e316ccbbae8f59abaf78bbc4c858ca81ddbf88f422f0ae0653aa768a2de" cmd=["grpc_health_probe","-addr=:50051"] Feb 18 19:31:56 crc kubenswrapper[4942]: E0218 19:31:56.187555 4942 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 801e3e316ccbbae8f59abaf78bbc4c858ca81ddbf88f422f0ae0653aa768a2de is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-928mg" podUID="83b97eec-f1b8-4205-933f-205e30caeec2" containerName="registry-server" Feb 18 19:31:57 crc kubenswrapper[4942]: E0218 19:31:57.778971 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc" Feb 18 19:31:57 crc kubenswrapper[4942]: E0218 19:31:57.779147 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m6wkc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-56dc67d744-hhjwz_openstack-operators(a65b16e4-f55f-427a-a629-2fbff014a7af): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:31:57 crc kubenswrapper[4942]: E0218 19:31:57.780285 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz" podUID="a65b16e4-f55f-427a-a629-2fbff014a7af" Feb 18 19:31:58 crc kubenswrapper[4942]: E0218 19:31:58.258307 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:8d65a2becf279bb8b6b1a09e273d9a2cb1ff41f85bc42ef2e4d573cbb8cbac89" Feb 18 19:31:58 crc kubenswrapper[4942]: E0218 19:31:58.258488 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:8d65a2becf279bb8b6b1a09e273d9a2cb1ff41f85bc42ef2e4d573cbb8cbac89,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-89l7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-54967dbbdf-tzn65_openstack-operators(9d43a851-2d6c-4fe9-86e1-04c7d382b257): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:31:58 crc kubenswrapper[4942]: E0218 19:31:58.259659 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65" podUID="9d43a851-2d6c-4fe9-86e1-04c7d382b257" Feb 18 19:31:58 crc kubenswrapper[4942]: E0218 19:31:58.602924 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz" podUID="a65b16e4-f55f-427a-a629-2fbff014a7af" Feb 18 19:31:58 crc kubenswrapper[4942]: E0218 19:31:58.604215 4942 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:8d65a2becf279bb8b6b1a09e273d9a2cb1ff41f85bc42ef2e4d573cbb8cbac89\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65" podUID="9d43a851-2d6c-4fe9-86e1-04c7d382b257" Feb 18 19:31:59 crc kubenswrapper[4942]: I0218 19:31:59.039816 4942 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:32:00 crc kubenswrapper[4942]: I0218 19:32:00.175545 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert\") pod \"infra-operator-controller-manager-66d6b5f488-5vptt\" (UID: \"230a2167-e078-48a6-93ce-84a37ff4ac02\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" Feb 18 19:32:00 crc kubenswrapper[4942]: E0218 19:32:00.175712 4942 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 19:32:00 crc kubenswrapper[4942]: E0218 19:32:00.175796 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert podName:230a2167-e078-48a6-93ce-84a37ff4ac02 nodeName:}" failed. No retries permitted until 2026-02-18 19:32:16.175754531 +0000 UTC m=+895.880687196 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert") pod "infra-operator-controller-manager-66d6b5f488-5vptt" (UID: "230a2167-e078-48a6-93ce-84a37ff4ac02") : secret "infra-operator-webhook-server-cert" not found Feb 18 19:32:00 crc kubenswrapper[4942]: I0218 19:32:00.580748 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk\" (UID: \"716e0e70-0ef0-4843-9ad3-d84f47a3397f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:00.589868 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/716e0e70-0ef0-4843-9ad3-d84f47a3397f-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk\" (UID: \"716e0e70-0ef0-4843-9ad3-d84f47a3397f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:00.851865 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:00.884023 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:00.884223 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:00.890340 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-webhook-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:00.891010 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0f7a5f35-f6e0-4f17-a380-13e8718ba658-metrics-certs\") pod \"openstack-operator-controller-manager-57f845558-vcfm9\" (UID: \"0f7a5f35-f6e0-4f17-a380-13e8718ba658\") " pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:00.924492 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:01.693482 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pmrtb"] Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:01.697487 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:01.718491 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pmrtb"] Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:01.823557 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-utilities\") pod \"redhat-operators-pmrtb\" (UID: \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\") " pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:01.823636 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksk4m\" (UniqueName: \"kubernetes.io/projected/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-kube-api-access-ksk4m\") pod \"redhat-operators-pmrtb\" (UID: \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\") " pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:01.823724 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-catalog-content\") pod \"redhat-operators-pmrtb\" (UID: \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\") " pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:01.924657 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-utilities\") pod \"redhat-operators-pmrtb\" (UID: \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\") " pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:01.924735 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksk4m\" (UniqueName: \"kubernetes.io/projected/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-kube-api-access-ksk4m\") pod \"redhat-operators-pmrtb\" (UID: \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\") " pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:01.924788 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-catalog-content\") pod \"redhat-operators-pmrtb\" (UID: \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\") " pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:01.925543 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-catalog-content\") pod \"redhat-operators-pmrtb\" (UID: \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\") " pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:01.925549 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-utilities\") pod \"redhat-operators-pmrtb\" (UID: \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\") " pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:01.944120 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksk4m\" (UniqueName: 
\"kubernetes.io/projected/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-kube-api-access-ksk4m\") pod \"redhat-operators-pmrtb\" (UID: \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\") " pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:02.070805 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:04 crc kubenswrapper[4942]: E0218 19:32:03.174723 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:16b541cff6581510978343a1bdc152a07fafcafa420b604f19291858e3d25fee" Feb 18 19:32:04 crc kubenswrapper[4942]: E0218 19:32:03.174961 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:16b541cff6581510978343a1bdc152a07fafcafa420b604f19291858e3d25fee,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mgjwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-96fff9cb8-qs9mb_openstack-operators(a15b8ac2-0742-4fd7-9a14-005620c93a3d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:32:04 crc kubenswrapper[4942]: E0218 19:32:03.177985 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb" podUID="a15b8ac2-0742-4fd7-9a14-005620c93a3d" Feb 18 19:32:04 crc kubenswrapper[4942]: E0218 19:32:03.650642 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/manila-operator@sha256:16b541cff6581510978343a1bdc152a07fafcafa420b604f19291858e3d25fee\\\"\"" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb" podUID="a15b8ac2-0742-4fd7-9a14-005620c93a3d" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.503746 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.582132 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83b97eec-f1b8-4205-933f-205e30caeec2-catalog-content\") pod \"83b97eec-f1b8-4205-933f-205e30caeec2\" (UID: \"83b97eec-f1b8-4205-933f-205e30caeec2\") " Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.582198 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83b97eec-f1b8-4205-933f-205e30caeec2-utilities\") pod \"83b97eec-f1b8-4205-933f-205e30caeec2\" (UID: \"83b97eec-f1b8-4205-933f-205e30caeec2\") " Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.582250 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kt9m\" (UniqueName: \"kubernetes.io/projected/83b97eec-f1b8-4205-933f-205e30caeec2-kube-api-access-6kt9m\") pod \"83b97eec-f1b8-4205-933f-205e30caeec2\" (UID: \"83b97eec-f1b8-4205-933f-205e30caeec2\") " Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.587822 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83b97eec-f1b8-4205-933f-205e30caeec2-kube-api-access-6kt9m" (OuterVolumeSpecName: "kube-api-access-6kt9m") pod "83b97eec-f1b8-4205-933f-205e30caeec2" (UID: "83b97eec-f1b8-4205-933f-205e30caeec2"). InnerVolumeSpecName "kube-api-access-6kt9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.591377 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83b97eec-f1b8-4205-933f-205e30caeec2-utilities" (OuterVolumeSpecName: "utilities") pod "83b97eec-f1b8-4205-933f-205e30caeec2" (UID: "83b97eec-f1b8-4205-933f-205e30caeec2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.629695 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83b97eec-f1b8-4205-933f-205e30caeec2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83b97eec-f1b8-4205-933f-205e30caeec2" (UID: "83b97eec-f1b8-4205-933f-205e30caeec2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.658934 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-928mg" event={"ID":"83b97eec-f1b8-4205-933f-205e30caeec2","Type":"ContainerDied","Data":"e90c667153875bf407511ed88e15dc632e46a63fd6b238de865623e6e16e6e1a"} Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.658993 4942 scope.go:117] "RemoveContainer" containerID="801e3e316ccbbae8f59abaf78bbc4c858ca81ddbf88f422f0ae0653aa768a2de" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.659045 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-928mg" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.684316 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83b97eec-f1b8-4205-933f-205e30caeec2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.684344 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83b97eec-f1b8-4205-933f-205e30caeec2-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.684354 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kt9m\" (UniqueName: \"kubernetes.io/projected/83b97eec-f1b8-4205-933f-205e30caeec2-kube-api-access-6kt9m\") on node \"crc\" DevicePath \"\"" Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.709875 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-928mg"] Feb 18 19:32:04 crc kubenswrapper[4942]: I0218 19:32:04.716538 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-928mg"] Feb 18 19:32:05 crc kubenswrapper[4942]: I0218 19:32:05.056964 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83b97eec-f1b8-4205-933f-205e30caeec2" path="/var/lib/kubelet/pods/83b97eec-f1b8-4205-933f-205e30caeec2/volumes" Feb 18 19:32:05 crc kubenswrapper[4942]: E0218 19:32:05.954321 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469" Feb 18 19:32:05 crc kubenswrapper[4942]: E0218 19:32:05.955606 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n49wh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-6c78d668d5-t9dzq_openstack-operators(11715b33-f996-46bf-81db-0557e84e7fea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:32:05 crc kubenswrapper[4942]: E0218 19:32:05.957616 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq" podUID="11715b33-f996-46bf-81db-0557e84e7fea" Feb 18 19:32:06 crc kubenswrapper[4942]: E0218 19:32:06.416228 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c" Feb 18 19:32:06 crc kubenswrapper[4942]: E0218 19:32:06.416400 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hktdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5ddd85db87-5jzdp_openstack-operators(cde9a09e-2dfe-410e-95ad-8f297b517ef4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:32:06 crc kubenswrapper[4942]: E0218 19:32:06.417504 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp" podUID="cde9a09e-2dfe-410e-95ad-8f297b517ef4" Feb 18 19:32:06 crc kubenswrapper[4942]: E0218 19:32:06.672542 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:9cb0b42ba1836ba4320a0a4660bfdeddea8c0685be379c0000dafb16398f4469\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq" podUID="11715b33-f996-46bf-81db-0557e84e7fea" Feb 18 19:32:06 crc kubenswrapper[4942]: E0218 19:32:06.672567 4942 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:ab8e8207abec9cf5da7afded75ea76d1c3d2b9ab0f8e3124f518651e38f3123c\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp" podUID="cde9a09e-2dfe-410e-95ad-8f297b517ef4" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.077860 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xhp5w"] Feb 18 19:32:07 crc kubenswrapper[4942]: E0218 19:32:07.078189 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83b97eec-f1b8-4205-933f-205e30caeec2" containerName="extract-content" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.078201 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b97eec-f1b8-4205-933f-205e30caeec2" containerName="extract-content" Feb 18 19:32:07 crc kubenswrapper[4942]: E0218 19:32:07.078215 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83b97eec-f1b8-4205-933f-205e30caeec2" containerName="extract-utilities" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.078221 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b97eec-f1b8-4205-933f-205e30caeec2" containerName="extract-utilities" Feb 18 19:32:07 crc kubenswrapper[4942]: E0218 19:32:07.078235 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83b97eec-f1b8-4205-933f-205e30caeec2" containerName="registry-server" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.078242 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b97eec-f1b8-4205-933f-205e30caeec2" containerName="registry-server" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.078395 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="83b97eec-f1b8-4205-933f-205e30caeec2" containerName="registry-server" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.079383 4942 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.105085 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xhp5w"] Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.127105 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-catalog-content\") pod \"community-operators-xhp5w\" (UID: \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\") " pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.127151 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-utilities\") pod \"community-operators-xhp5w\" (UID: \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\") " pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.127493 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm557\" (UniqueName: \"kubernetes.io/projected/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-kube-api-access-tm557\") pod \"community-operators-xhp5w\" (UID: \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\") " pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.229039 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm557\" (UniqueName: \"kubernetes.io/projected/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-kube-api-access-tm557\") pod \"community-operators-xhp5w\" (UID: \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\") " pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 
19:32:07.229110 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-catalog-content\") pod \"community-operators-xhp5w\" (UID: \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\") " pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.229136 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-utilities\") pod \"community-operators-xhp5w\" (UID: \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\") " pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.229609 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-utilities\") pod \"community-operators-xhp5w\" (UID: \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\") " pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.229815 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-catalog-content\") pod \"community-operators-xhp5w\" (UID: \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\") " pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.250512 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm557\" (UniqueName: \"kubernetes.io/projected/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-kube-api-access-tm557\") pod \"community-operators-xhp5w\" (UID: \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\") " pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:07 crc kubenswrapper[4942]: I0218 19:32:07.404807 4942 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:10 crc kubenswrapper[4942]: I0218 19:32:10.018595 4942 scope.go:117] "RemoveContainer" containerID="63a88ca6ca33dff5e2ab3bac904d6ef958cd2ba371f5bb8c57b0d9a89c8d0c4e" Feb 18 19:32:10 crc kubenswrapper[4942]: I0218 19:32:10.811797 4942 scope.go:117] "RemoveContainer" containerID="d49471940515dac44ca7b4deb7b69786b17d58c82210cbd128da7a4353fdc212" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.080361 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9"] Feb 18 19:32:11 crc kubenswrapper[4942]: W0218 19:32:11.090971 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f7a5f35_f6e0_4f17_a380_13e8718ba658.slice/crio-90db3468cce7dfd35ccc773b5df3877a748faf704881c53896b478806d826b11 WatchSource:0}: Error finding container 90db3468cce7dfd35ccc773b5df3877a748faf704881c53896b478806d826b11: Status 404 returned error can't find the container with id 90db3468cce7dfd35ccc773b5df3877a748faf704881c53896b478806d826b11 Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.139271 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pmrtb"] Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.147242 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk"] Feb 18 19:32:11 crc kubenswrapper[4942]: W0218 19:32:11.171505 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4cd7c2a_4d5f_48c2_9af4_bcc237367416.slice/crio-dee0baac7e3a9c49a9187f5763b3e97bcf6e3a78d211e733d817f073fc2a3c4b WatchSource:0}: Error finding container dee0baac7e3a9c49a9187f5763b3e97bcf6e3a78d211e733d817f073fc2a3c4b: Status 404 
returned error can't find the container with id dee0baac7e3a9c49a9187f5763b3e97bcf6e3a78d211e733d817f073fc2a3c4b Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.304103 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xhp5w"] Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.708978 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-26x4h" event={"ID":"844a0cad-5a6a-4ab4-8e32-388835eb9f4a","Type":"ContainerStarted","Data":"c1bbdecfe782024ab3ee2b60bd247e6c6890af98dee050a37e909cd64cd9d960"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.709332 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-26x4h" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.716196 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-9gjbj" event={"ID":"80bc5b9b-00c2-4003-8279-1dbc3ff3aa05","Type":"ContainerStarted","Data":"7480a5995118be4d9ce6e060f3dcca85c3bf26dfca52129b531ff7ad10f4015a"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.716350 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-9gjbj" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.728602 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhp5w" event={"ID":"04adf08c-4b6b-49c1-be25-d2cc8c67dce2","Type":"ContainerStarted","Data":"d3a4dd0670baf5152526b789369cfc767a3b4f746a7ebf6b7d9421c87331aa1b"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.728644 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhp5w" 
event={"ID":"04adf08c-4b6b-49c1-be25-d2cc8c67dce2","Type":"ContainerStarted","Data":"5e3cbf30742449f377e56eae72b81f7947e381700f22252f373a197aae1f9b45"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.733539 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72" event={"ID":"5fe849cd-ac9e-48bb-a7dd-f7f529a324e3","Type":"ContainerStarted","Data":"390939293f8458f582c4596c1f4efb6dad4bd2551babc091f16fbd23e1fb4133"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.736105 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" event={"ID":"716e0e70-0ef0-4843-9ad3-d84f47a3397f","Type":"ContainerStarted","Data":"f56c83e0bb2fbdc3faa2568d196d341fbff48dae52eaf90f25beae7ad4410e7b"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.748231 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6kt98" event={"ID":"df8c140d-a735-4a14-8239-67f577546e01","Type":"ContainerStarted","Data":"e8309540c2de7c1091ee1196531e8a09c1f7fab1cce1e11ba5a88c193f81de4e"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.748378 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6kt98" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.753022 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-26x4h" podStartSLOduration=6.554672404 podStartE2EDuration="27.753005599s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.229753487 +0000 UTC m=+864.934686142" lastFinishedPulling="2026-02-18 19:32:06.428086672 +0000 UTC m=+886.133019337" observedRunningTime="2026-02-18 19:32:11.74746303 +0000 UTC m=+891.452395695" 
watchObservedRunningTime="2026-02-18 19:32:11.753005599 +0000 UTC m=+891.457938254" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.757043 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-f8nnp" event={"ID":"c2cc0d22-92b6-4c67-9627-79abffb9917c","Type":"ContainerStarted","Data":"10db60d2baee30c4eb2de8561b1daf0856f78e5e40bb024a2f329ea2f85eb594"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.757737 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-f8nnp" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.762151 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmrtb" event={"ID":"a4cd7c2a-4d5f-48c2-9af4-bcc237367416","Type":"ContainerStarted","Data":"dee0baac7e3a9c49a9187f5763b3e97bcf6e3a78d211e733d817f073fc2a3c4b"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.775641 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-9qvzl" event={"ID":"1e73a8a0-3246-4a08-b4be-d587d82742a4","Type":"ContainerStarted","Data":"d92a181d09de29a842692b8d7e0a930f74576a1453b28ad85cb69f74d7c93806"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.775963 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-9qvzl" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.792307 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-9595d6797-xrzwv" event={"ID":"4cbefad2-6c6d-4b7b-bba9-acf857a54a4b","Type":"ContainerStarted","Data":"680b16332f49059f660d9bb33e58365db2c2369f30b4ad6dbd1d1a7dbc47100d"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.792585 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/heat-operator-controller-manager-9595d6797-xrzwv" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.810016 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225" event={"ID":"8ca2018a-1b2e-4fa2-8564-3e2a0d3d8377","Type":"ContainerStarted","Data":"0fb9b55866c91b5201a1184b05069132e435d7929f2a2d1cb55f7d78ec122461"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.810693 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.815865 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56k6g" event={"ID":"51f45ea1-2b95-4553-9e3d-5e6bb4c8b862","Type":"ContainerStarted","Data":"b0487a7e832bf13b8b13552580da81bf5dcb7c629d72bf0966dab0b259a928e6"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.815908 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56k6g" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.830119 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" event={"ID":"0f7a5f35-f6e0-4f17-a380-13e8718ba658","Type":"ContainerStarted","Data":"90db3468cce7dfd35ccc773b5df3877a748faf704881c53896b478806d826b11"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.831783 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.855907 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-9gjbj" 
podStartSLOduration=7.114890696 podStartE2EDuration="27.85588791s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.68793886 +0000 UTC m=+865.392871525" lastFinishedPulling="2026-02-18 19:32:06.428936074 +0000 UTC m=+886.133868739" observedRunningTime="2026-02-18 19:32:11.84871382 +0000 UTC m=+891.553646495" watchObservedRunningTime="2026-02-18 19:32:11.85588791 +0000 UTC m=+891.560820575" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.860364 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-g7kpv" event={"ID":"8d849c9e-0da1-4910-9922-5ea2dd2728a2","Type":"ContainerStarted","Data":"ab2260185e477444de83cdabe59b88c988214cf068d9d7c979d902ecd79e09bf"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.861156 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-g7kpv" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.887040 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-rvgp6" event={"ID":"829c57a8-54c3-43c5-8bea-2ceeeafeb143","Type":"ContainerStarted","Data":"b63a72628a085f5b208d81f35b38e1e21b8ab3209207b7d224ee7bfff08b74f2"} Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.887682 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-rvgp6" Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.885751 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wvj72" podStartSLOduration=3.121887411 podStartE2EDuration="27.885728848s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:46.036995496 +0000 UTC m=+865.741928161" 
lastFinishedPulling="2026-02-18 19:32:10.800836933 +0000 UTC m=+890.505769598" observedRunningTime="2026-02-18 19:32:11.883093322 +0000 UTC m=+891.588025987" watchObservedRunningTime="2026-02-18 19:32:11.885728848 +0000 UTC m=+891.590661523"
Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.910645 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-9qvzl" podStartSLOduration=6.556031368 podStartE2EDuration="27.910627813s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.074545954 +0000 UTC m=+864.779478619" lastFinishedPulling="2026-02-18 19:32:06.429142399 +0000 UTC m=+886.134075064" observedRunningTime="2026-02-18 19:32:11.907414622 +0000 UTC m=+891.612347287" watchObservedRunningTime="2026-02-18 19:32:11.910627813 +0000 UTC m=+891.615560478"
Feb 18 19:32:11 crc kubenswrapper[4942]: I0218 19:32:11.949914 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-9595d6797-xrzwv" podStartSLOduration=6.9308956219999995 podStartE2EDuration="27.949892498s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.409117737 +0000 UTC m=+865.114050402" lastFinishedPulling="2026-02-18 19:32:06.428114603 +0000 UTC m=+886.133047278" observedRunningTime="2026-02-18 19:32:11.933368423 +0000 UTC m=+891.638301108" watchObservedRunningTime="2026-02-18 19:32:11.949892498 +0000 UTC m=+891.654825163"
Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.032984 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225" podStartSLOduration=3.231510622 podStartE2EDuration="28.032961412s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.921086649 +0000 UTC m=+865.626019314" lastFinishedPulling="2026-02-18 19:32:10.722537439 +0000 UTC m=+890.427470104" observedRunningTime="2026-02-18 19:32:11.992377724 +0000 UTC m=+891.697310379" watchObservedRunningTime="2026-02-18 19:32:12.032961412 +0000 UTC m=+891.737894077"
Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.046863 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-g7kpv" podStartSLOduration=6.692924812 podStartE2EDuration="28.04684801s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.073885957 +0000 UTC m=+864.778818622" lastFinishedPulling="2026-02-18 19:32:06.427809155 +0000 UTC m=+886.132741820" observedRunningTime="2026-02-18 19:32:12.025278139 +0000 UTC m=+891.730210814" watchObservedRunningTime="2026-02-18 19:32:12.04684801 +0000 UTC m=+891.751780675"
Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.048490 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56k6g" podStartSLOduration=6.544076178 podStartE2EDuration="28.048484251s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:44.909219677 +0000 UTC m=+864.614152342" lastFinishedPulling="2026-02-18 19:32:06.41362774 +0000 UTC m=+886.118560415" observedRunningTime="2026-02-18 19:32:12.044107561 +0000 UTC m=+891.749040236" watchObservedRunningTime="2026-02-18 19:32:12.048484251 +0000 UTC m=+891.753416906"
Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.082514 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" podStartSLOduration=28.082499434 podStartE2EDuration="28.082499434s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:32:12.079031517 +0000 UTC m=+891.783964182" watchObservedRunningTime="2026-02-18 19:32:12.082499434 +0000 UTC m=+891.787432099"
Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.181840 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-f8nnp" podStartSLOduration=7.294289407 podStartE2EDuration="28.181824156s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.532519642 +0000 UTC m=+865.237452307" lastFinishedPulling="2026-02-18 19:32:06.420054381 +0000 UTC m=+886.124987056" observedRunningTime="2026-02-18 19:32:12.145252029 +0000 UTC m=+891.850184694" watchObservedRunningTime="2026-02-18 19:32:12.181824156 +0000 UTC m=+891.886756821"
Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.197239 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6kt98" podStartSLOduration=7.490436477 podStartE2EDuration="28.197222702s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.72102208 +0000 UTC m=+865.425954745" lastFinishedPulling="2026-02-18 19:32:06.427808315 +0000 UTC m=+886.132740970" observedRunningTime="2026-02-18 19:32:12.179463127 +0000 UTC m=+891.884395792" watchObservedRunningTime="2026-02-18 19:32:12.197222702 +0000 UTC m=+891.902155367"
Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.274106 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-rvgp6" podStartSLOduration=8.837003958 podStartE2EDuration="28.274091031s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:44.992895076 +0000 UTC m=+864.697827741" lastFinishedPulling="2026-02-18 19:32:04.429982149 +0000 UTC m=+884.134914814" observedRunningTime="2026-02-18 19:32:12.22700994 +0000 UTC m=+891.931942605" watchObservedRunningTime="2026-02-18 19:32:12.274091031 +0000 UTC m=+891.979023686"
Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.914939 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84" event={"ID":"250062ed-a35d-489a-a6b5-e6f96d1532d6","Type":"ContainerStarted","Data":"f1dd92b6986761456a3e1ceded0cd4cf772b850a7f5f50fb56434c23cb331ecc"}
Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.915992 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84"
Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.924219 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr" event={"ID":"6618726f-c93c-4d05-b6d9-a08aca84801f","Type":"ContainerStarted","Data":"178a8abd8da6dd29444ee8dd3b30ed9484c2e3123d1c655c9bbf99d410cc2433"}
Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.924697 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr"
Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.935132 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz" event={"ID":"a65b16e4-f55f-427a-a629-2fbff014a7af","Type":"ContainerStarted","Data":"59fc3f13648d1b366f82c5928cd6cd4b94a82e04de46f3aeb21b7751df9a5d87"}
Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.935650 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz"
Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.945681 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" event={"ID":"3b42f10c-a162-4d74-9eed-b6c3ef08cdb7","Type":"ContainerStarted","Data":"b0e5cc17d5708a2bf67f2c62fdedb963fde1c3e9e426935ccb4895be0efefc73"}
Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.946394 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd"
Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.946715 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84" podStartSLOduration=4.166554668 podStartE2EDuration="28.946702034s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:46.031218741 +0000 UTC m=+865.736151406" lastFinishedPulling="2026-02-18 19:32:10.811366107 +0000 UTC m=+890.516298772" observedRunningTime="2026-02-18 19:32:12.940830566 +0000 UTC m=+892.645763231" watchObservedRunningTime="2026-02-18 19:32:12.946702034 +0000 UTC m=+892.651634699"
Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.961378 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr" podStartSLOduration=4.230936423 podStartE2EDuration="28.961363551s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.920252908 +0000 UTC m=+865.625185573" lastFinishedPulling="2026-02-18 19:32:10.650680036 +0000 UTC m=+890.355612701" observedRunningTime="2026-02-18 19:32:12.95972899 +0000 UTC m=+892.664661655" watchObservedRunningTime="2026-02-18 19:32:12.961363551 +0000 UTC m=+892.666296216"
Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.980485 4942 generic.go:334] "Generic (PLEG): container finished" podID="04adf08c-4b6b-49c1-be25-d2cc8c67dce2" containerID="d3a4dd0670baf5152526b789369cfc767a3b4f746a7ebf6b7d9421c87331aa1b" exitCode=0
Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.980559 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhp5w" event={"ID":"04adf08c-4b6b-49c1-be25-d2cc8c67dce2","Type":"ContainerDied","Data":"d3a4dd0670baf5152526b789369cfc767a3b4f746a7ebf6b7d9421c87331aa1b"}
Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.980584 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhp5w" event={"ID":"04adf08c-4b6b-49c1-be25-d2cc8c67dce2","Type":"ContainerStarted","Data":"ed9840277aa9db07d748c964a420663376df6cd57140cd5dec23b586bf0ce286"}
Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.983199 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz" podStartSLOduration=4.004905683 podStartE2EDuration="28.983183849s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.911362025 +0000 UTC m=+865.616294680" lastFinishedPulling="2026-02-18 19:32:10.889640181 +0000 UTC m=+890.594572846" observedRunningTime="2026-02-18 19:32:12.981800244 +0000 UTC m=+892.686732909" watchObservedRunningTime="2026-02-18 19:32:12.983183849 +0000 UTC m=+892.688116514"
Feb 18 19:32:12 crc kubenswrapper[4942]: I0218 19:32:12.991006 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9" event={"ID":"0f7a5f35-f6e0-4f17-a380-13e8718ba658","Type":"ContainerStarted","Data":"7c1c4b59848fdb8ca9d69d3cf3df8dbf093a478293a91ec3ec32fdba3d691720"}
Feb 18 19:32:13 crc kubenswrapper[4942]: I0218 19:32:13.006908 4942 generic.go:334] "Generic (PLEG): container finished" podID="a4cd7c2a-4d5f-48c2-9af4-bcc237367416" containerID="c0f81453e7d6dc223b51f45fabda95b58d634396c8127da2162b425bed1f7043" exitCode=0
Feb 18 19:32:13 crc kubenswrapper[4942]: I0218 19:32:13.006968 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmrtb" event={"ID":"a4cd7c2a-4d5f-48c2-9af4-bcc237367416","Type":"ContainerDied","Data":"c0f81453e7d6dc223b51f45fabda95b58d634396c8127da2162b425bed1f7043"}
Feb 18 19:32:13 crc kubenswrapper[4942]: I0218 19:32:13.029477 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v" event={"ID":"2fda65c9-97fe-4689-bd35-7f7974841223","Type":"ContainerStarted","Data":"cb32a9d94b49ddf033562e8d3ce5669858dce2299b0f583f194dc4d58e683e47"}
Feb 18 19:32:13 crc kubenswrapper[4942]: I0218 19:32:13.029861 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v"
Feb 18 19:32:13 crc kubenswrapper[4942]: I0218 19:32:13.049185 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" podStartSLOduration=7.784979906 podStartE2EDuration="29.049167584s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.705977763 +0000 UTC m=+865.410910428" lastFinishedPulling="2026-02-18 19:32:06.970165451 +0000 UTC m=+886.675098106" observedRunningTime="2026-02-18 19:32:13.049121843 +0000 UTC m=+892.754054508" watchObservedRunningTime="2026-02-18 19:32:13.049167584 +0000 UTC m=+892.754100249"
Feb 18 19:32:13 crc kubenswrapper[4942]: I0218 19:32:13.085198 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v" podStartSLOduration=4.324620252 podStartE2EDuration="29.085179417s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.926638598 +0000 UTC m=+865.631571263" lastFinishedPulling="2026-02-18 19:32:10.687197773 +0000 UTC m=+890.392130428" observedRunningTime="2026-02-18 19:32:13.079056514 +0000 UTC m=+892.783989179" watchObservedRunningTime="2026-02-18 19:32:13.085179417 +0000 UTC m=+892.790112082"
Feb 18 19:32:13 crc kubenswrapper[4942]: E0218 19:32:13.488915 4942 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04adf08c_4b6b_49c1_be25_d2cc8c67dce2.slice/crio-ed9840277aa9db07d748c964a420663376df6cd57140cd5dec23b586bf0ce286.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04adf08c_4b6b_49c1_be25_d2cc8c67dce2.slice/crio-conmon-ed9840277aa9db07d748c964a420663376df6cd57140cd5dec23b586bf0ce286.scope\": RecentStats: unable to find data in memory cache]"
Feb 18 19:32:14 crc kubenswrapper[4942]: I0218 19:32:14.043260 4942 generic.go:334] "Generic (PLEG): container finished" podID="04adf08c-4b6b-49c1-be25-d2cc8c67dce2" containerID="ed9840277aa9db07d748c964a420663376df6cd57140cd5dec23b586bf0ce286" exitCode=0
Feb 18 19:32:14 crc kubenswrapper[4942]: I0218 19:32:14.043356 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhp5w" event={"ID":"04adf08c-4b6b-49c1-be25-d2cc8c67dce2","Type":"ContainerDied","Data":"ed9840277aa9db07d748c964a420663376df6cd57140cd5dec23b586bf0ce286"}
Feb 18 19:32:14 crc kubenswrapper[4942]: I0218 19:32:14.054547 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65" event={"ID":"9d43a851-2d6c-4fe9-86e1-04c7d382b257","Type":"ContainerStarted","Data":"d3e41fbd7f44dec587c3a844fafd8f7ac654e355eeca4863ea82863df532edd5"}
Feb 18 19:32:14 crc kubenswrapper[4942]: I0218 19:32:14.106751 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65" podStartSLOduration=3.068937553 podStartE2EDuration="30.106731074s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.691686894 +0000 UTC m=+865.396619559" lastFinishedPulling="2026-02-18 19:32:12.729480415 +0000 UTC m=+892.434413080" observedRunningTime="2026-02-18 19:32:14.100563619 +0000 UTC m=+893.805496284" watchObservedRunningTime="2026-02-18 19:32:14.106731074 +0000 UTC m=+893.811663739"
Feb 18 19:32:14 crc kubenswrapper[4942]: I0218 19:32:14.792416 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65"
Feb 18 19:32:15 crc kubenswrapper[4942]: I0218 19:32:15.065051 4942 generic.go:334] "Generic (PLEG): container finished" podID="a4cd7c2a-4d5f-48c2-9af4-bcc237367416" containerID="15e62c1b1fb7cf60b372e3cd480a3c4a8ecacdad111be63f1be8804883bfdafd" exitCode=0
Feb 18 19:32:15 crc kubenswrapper[4942]: I0218 19:32:15.065825 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmrtb" event={"ID":"a4cd7c2a-4d5f-48c2-9af4-bcc237367416","Type":"ContainerDied","Data":"15e62c1b1fb7cf60b372e3cd480a3c4a8ecacdad111be63f1be8804883bfdafd"}
Feb 18 19:32:16 crc kubenswrapper[4942]: I0218 19:32:16.071882 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" event={"ID":"716e0e70-0ef0-4843-9ad3-d84f47a3397f","Type":"ContainerStarted","Data":"a47c27420db7e4a7c48bc06d00cef1967f56d185f72e2e98c675f3e7e030695c"}
Feb 18 19:32:16 crc kubenswrapper[4942]: I0218 19:32:16.072231 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk"
Feb 18 19:32:16 crc kubenswrapper[4942]: I0218 19:32:16.074863 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhp5w" event={"ID":"04adf08c-4b6b-49c1-be25-d2cc8c67dce2","Type":"ContainerStarted","Data":"c694aafcd30c06ff7abe831fc6b89c1cfac7ecced4de1f26eec3229fdb38fd35"}
Feb 18 19:32:16 crc kubenswrapper[4942]: I0218 19:32:16.105356 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk" podStartSLOduration=27.914395288 podStartE2EDuration="32.105333379s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:32:11.172479586 +0000 UTC m=+890.877412251" lastFinishedPulling="2026-02-18 19:32:15.363417677 +0000 UTC m=+895.068350342" observedRunningTime="2026-02-18 19:32:16.102149209 +0000 UTC m=+895.807081884" watchObservedRunningTime="2026-02-18 19:32:16.105333379 +0000 UTC m=+895.810266054"
Feb 18 19:32:16 crc kubenswrapper[4942]: I0218 19:32:16.137706 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xhp5w" podStartSLOduration=5.5066826760000005 podStartE2EDuration="9.13768121s" podCreationTimestamp="2026-02-18 19:32:07 +0000 UTC" firstStartedPulling="2026-02-18 19:32:11.731868009 +0000 UTC m=+891.436800674" lastFinishedPulling="2026-02-18 19:32:15.362866543 +0000 UTC m=+895.067799208" observedRunningTime="2026-02-18 19:32:16.13010292 +0000 UTC m=+895.835035585" watchObservedRunningTime="2026-02-18 19:32:16.13768121 +0000 UTC m=+895.842613885"
Feb 18 19:32:16 crc kubenswrapper[4942]: I0218 19:32:16.185481 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert\") pod \"infra-operator-controller-manager-66d6b5f488-5vptt\" (UID: \"230a2167-e078-48a6-93ce-84a37ff4ac02\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt"
Feb 18 19:32:16 crc kubenswrapper[4942]: I0218 19:32:16.191955 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/230a2167-e078-48a6-93ce-84a37ff4ac02-cert\") pod \"infra-operator-controller-manager-66d6b5f488-5vptt\" (UID: \"230a2167-e078-48a6-93ce-84a37ff4ac02\") " pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt"
Feb 18 19:32:16 crc kubenswrapper[4942]: I0218 19:32:16.369144 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt"
Feb 18 19:32:16 crc kubenswrapper[4942]: I0218 19:32:16.592870 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt"]
Feb 18 19:32:16 crc kubenswrapper[4942]: W0218 19:32:16.618132 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod230a2167_e078_48a6_93ce_84a37ff4ac02.slice/crio-1ab1536a8d4d7eba054845abb071cf879edb63e81a4f3e21e754ff08d0a88010 WatchSource:0}: Error finding container 1ab1536a8d4d7eba054845abb071cf879edb63e81a4f3e21e754ff08d0a88010: Status 404 returned error can't find the container with id 1ab1536a8d4d7eba054845abb071cf879edb63e81a4f3e21e754ff08d0a88010
Feb 18 19:32:17 crc kubenswrapper[4942]: I0218 19:32:17.084568 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmrtb" event={"ID":"a4cd7c2a-4d5f-48c2-9af4-bcc237367416","Type":"ContainerStarted","Data":"c3838fe4209ef9113ae57deb956ef6ca941b71cb6e13a15ad9c87fff7e92adfe"}
Feb 18 19:32:17 crc kubenswrapper[4942]: I0218 19:32:17.086077 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" event={"ID":"230a2167-e078-48a6-93ce-84a37ff4ac02","Type":"ContainerStarted","Data":"1ab1536a8d4d7eba054845abb071cf879edb63e81a4f3e21e754ff08d0a88010"}
Feb 18 19:32:17 crc kubenswrapper[4942]: I0218 19:32:17.109101 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pmrtb" podStartSLOduration=13.434142697 podStartE2EDuration="16.109080318s" podCreationTimestamp="2026-02-18 19:32:01 +0000 UTC" firstStartedPulling="2026-02-18 19:32:13.016275559 +0000 UTC m=+892.721208224" lastFinishedPulling="2026-02-18 19:32:15.69121317 +0000 UTC m=+895.396145845" observedRunningTime="2026-02-18 19:32:17.105694803 +0000 UTC m=+896.810627478" watchObservedRunningTime="2026-02-18 19:32:17.109080318 +0000 UTC m=+896.814012983"
Feb 18 19:32:17 crc kubenswrapper[4942]: I0218 19:32:17.405797 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xhp5w"
Feb 18 19:32:17 crc kubenswrapper[4942]: I0218 19:32:17.406009 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xhp5w"
Feb 18 19:32:18 crc kubenswrapper[4942]: I0218 19:32:18.097344 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb" event={"ID":"a15b8ac2-0742-4fd7-9a14-005620c93a3d","Type":"ContainerStarted","Data":"0ad1cbaaf404d743218b61f9fddd7c41c34d9cb82a66025522f9402630a85f0f"}
Feb 18 19:32:18 crc kubenswrapper[4942]: I0218 19:32:18.098136 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb"
Feb 18 19:32:18 crc kubenswrapper[4942]: I0218 19:32:18.115312 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb" podStartSLOduration=2.181942753 podStartE2EDuration="34.115294519s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.521871615 +0000 UTC m=+865.226804280" lastFinishedPulling="2026-02-18 19:32:17.455223381 +0000 UTC m=+897.160156046" observedRunningTime="2026-02-18 19:32:18.112201261 +0000 UTC m=+897.817133936" watchObservedRunningTime="2026-02-18 19:32:18.115294519 +0000 UTC m=+897.820227184"
Feb 18 19:32:18 crc kubenswrapper[4942]: I0218 19:32:18.486255 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-xhp5w" podUID="04adf08c-4b6b-49c1-be25-d2cc8c67dce2" containerName="registry-server" probeResult="failure" output=<
Feb 18 19:32:18 crc kubenswrapper[4942]: timeout: failed to connect service ":50051" within 1s
Feb 18 19:32:18 crc kubenswrapper[4942]: >
Feb 18 19:32:19 crc kubenswrapper[4942]: I0218 19:32:19.104911 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" event={"ID":"230a2167-e078-48a6-93ce-84a37ff4ac02","Type":"ContainerStarted","Data":"e22ee529428539d8e11ca7e2fdd7467105524ef09980bcb95ea88287e14f1ae0"}
Feb 18 19:32:19 crc kubenswrapper[4942]: I0218 19:32:19.105163 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt"
Feb 18 19:32:19 crc kubenswrapper[4942]: I0218 19:32:19.126170 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt" podStartSLOduration=32.813120143 podStartE2EDuration="35.126150656s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:32:16.620227195 +0000 UTC m=+896.325159870" lastFinishedPulling="2026-02-18 19:32:18.933257718 +0000 UTC m=+898.638190383" observedRunningTime="2026-02-18 19:32:19.119387547 +0000 UTC m=+898.824320202" watchObservedRunningTime="2026-02-18 19:32:19.126150656 +0000 UTC m=+898.831083321"
Feb 18 19:32:20 crc kubenswrapper[4942]: I0218 19:32:20.858915 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-7ssrk"
Feb 18 19:32:20 crc kubenswrapper[4942]: I0218 19:32:20.933636 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-57f845558-vcfm9"
Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.260607 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dh7gx"]
Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.264402 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dh7gx"
Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.271636 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dh7gx"]
Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.273986 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/078b3d71-94dd-42b8-8804-84590a8abe44-catalog-content\") pod \"redhat-marketplace-dh7gx\" (UID: \"078b3d71-94dd-42b8-8804-84590a8abe44\") " pod="openshift-marketplace/redhat-marketplace-dh7gx"
Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.274042 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/078b3d71-94dd-42b8-8804-84590a8abe44-utilities\") pod \"redhat-marketplace-dh7gx\" (UID: \"078b3d71-94dd-42b8-8804-84590a8abe44\") " pod="openshift-marketplace/redhat-marketplace-dh7gx"
Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.274076 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbnpw\" (UniqueName: \"kubernetes.io/projected/078b3d71-94dd-42b8-8804-84590a8abe44-kube-api-access-zbnpw\") pod \"redhat-marketplace-dh7gx\" (UID: \"078b3d71-94dd-42b8-8804-84590a8abe44\") " pod="openshift-marketplace/redhat-marketplace-dh7gx"
Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.375483 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/078b3d71-94dd-42b8-8804-84590a8abe44-catalog-content\") pod \"redhat-marketplace-dh7gx\" (UID: \"078b3d71-94dd-42b8-8804-84590a8abe44\") " pod="openshift-marketplace/redhat-marketplace-dh7gx"
Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.375553 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/078b3d71-94dd-42b8-8804-84590a8abe44-utilities\") pod \"redhat-marketplace-dh7gx\" (UID: \"078b3d71-94dd-42b8-8804-84590a8abe44\") " pod="openshift-marketplace/redhat-marketplace-dh7gx"
Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.375600 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbnpw\" (UniqueName: \"kubernetes.io/projected/078b3d71-94dd-42b8-8804-84590a8abe44-kube-api-access-zbnpw\") pod \"redhat-marketplace-dh7gx\" (UID: \"078b3d71-94dd-42b8-8804-84590a8abe44\") " pod="openshift-marketplace/redhat-marketplace-dh7gx"
Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.375961 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/078b3d71-94dd-42b8-8804-84590a8abe44-utilities\") pod \"redhat-marketplace-dh7gx\" (UID: \"078b3d71-94dd-42b8-8804-84590a8abe44\") " pod="openshift-marketplace/redhat-marketplace-dh7gx"
Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.375974 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/078b3d71-94dd-42b8-8804-84590a8abe44-catalog-content\") pod \"redhat-marketplace-dh7gx\" (UID: \"078b3d71-94dd-42b8-8804-84590a8abe44\") " pod="openshift-marketplace/redhat-marketplace-dh7gx"
Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.392842 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbnpw\" (UniqueName: \"kubernetes.io/projected/078b3d71-94dd-42b8-8804-84590a8abe44-kube-api-access-zbnpw\") pod \"redhat-marketplace-dh7gx\" (UID: \"078b3d71-94dd-42b8-8804-84590a8abe44\") " pod="openshift-marketplace/redhat-marketplace-dh7gx"
Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.591973 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dh7gx"
Feb 18 19:32:21 crc kubenswrapper[4942]: I0218 19:32:21.924999 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dh7gx"]
Feb 18 19:32:22 crc kubenswrapper[4942]: I0218 19:32:22.071885 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pmrtb"
Feb 18 19:32:22 crc kubenswrapper[4942]: I0218 19:32:22.071934 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pmrtb"
Feb 18 19:32:22 crc kubenswrapper[4942]: I0218 19:32:22.148147 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dh7gx" event={"ID":"078b3d71-94dd-42b8-8804-84590a8abe44","Type":"ContainerStarted","Data":"256b7d08b3f9eb072fd2995b03828975c96e7e95fda87c10cfd7a170fdd2595a"}
Feb 18 19:32:23 crc kubenswrapper[4942]: I0218 19:32:23.117544 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pmrtb" podUID="a4cd7c2a-4d5f-48c2-9af4-bcc237367416" containerName="registry-server" probeResult="failure" output=<
Feb 18 19:32:23 crc kubenswrapper[4942]: timeout: failed to connect service ":50051" within 1s
Feb 18 19:32:23 crc kubenswrapper[4942]: >
Feb 18 19:32:24 crc kubenswrapper[4942]: I0218 19:32:24.374122 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-c4b7d6946-rvgp6"
Feb 18 19:32:24 crc kubenswrapper[4942]: I0218 19:32:24.388996 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-57746b5ff9-56k6g"
Feb 18 19:32:24 crc kubenswrapper[4942]: I0218 19:32:24.442441 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-26x4h"
Feb 18 19:32:24 crc kubenswrapper[4942]: I0218 19:32:24.485646 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68c6d499cb-g7kpv"
Feb 18 19:32:24 crc kubenswrapper[4942]: I0218 19:32:24.516721 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-9595d6797-xrzwv"
Feb 18 19:32:24 crc kubenswrapper[4942]: I0218 19:32:24.608696 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6494cdbf8f-9qvzl"
Feb 18 19:32:24 crc kubenswrapper[4942]: I0218 19:32:24.757521 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-96fff9cb8-qs9mb"
Feb 18 19:32:24 crc kubenswrapper[4942]: I0218 19:32:24.792977 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-66997756f6-f8nnp"
Feb 18 19:32:24 crc kubenswrapper[4942]: I0218 19:32:24.793392 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-54967dbbdf-tzn65"
Feb 18 19:32:24 crc kubenswrapper[4942]: I0218 19:32:24.840686 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-54fb488b88-9gjbj"
Feb 18 19:32:24 crc kubenswrapper[4942]: I0218 19:32:24.892565 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd"
Feb 18 19:32:25 crc kubenswrapper[4942]: I0218 19:32:25.057657 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-85c99d655-6kt98"
Feb 18 19:32:25 crc kubenswrapper[4942]: I0218 19:32:25.116250 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-57bd55f9b7-cg225"
Feb 18 19:32:25 crc kubenswrapper[4942]: I0218 19:32:25.123570 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-79558bbfbf-r8hvr"
Feb 18 19:32:25 crc kubenswrapper[4942]: I0218 19:32:25.203049 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-hhjwz"
Feb 18 19:32:25 crc kubenswrapper[4942]: I0218 19:32:25.220663 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-shr4v"
Feb 18 19:32:25 crc kubenswrapper[4942]: I0218 19:32:25.253832 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-c8b4db7df-h9q84"
Feb 18 19:32:26 crc kubenswrapper[4942]: I0218 19:32:26.375455 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-66d6b5f488-5vptt"
Feb 18 19:32:27 crc kubenswrapper[4942]: I0218 19:32:27.465905 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xhp5w"
Feb 18 19:32:27 crc kubenswrapper[4942]: I0218 19:32:27.513811 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xhp5w"
Feb 18 19:32:27 crc kubenswrapper[4942]: I0218 19:32:27.699945 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xhp5w"]
Feb 18 19:32:29 crc kubenswrapper[4942]: I0218 19:32:29.194271 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xhp5w" podUID="04adf08c-4b6b-49c1-be25-d2cc8c67dce2" containerName="registry-server" containerID="cri-o://c694aafcd30c06ff7abe831fc6b89c1cfac7ecced4de1f26eec3229fdb38fd35" gracePeriod=2
Feb 18 19:32:32 crc kubenswrapper[4942]: I0218 19:32:32.121152 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pmrtb"
Feb 18 19:32:32 crc kubenswrapper[4942]: I0218 19:32:32.183534 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pmrtb"
Feb 18 19:32:33 crc kubenswrapper[4942]: I0218 19:32:33.283438 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pmrtb"]
Feb 18 19:32:33 crc kubenswrapper[4942]: I0218 19:32:33.283696 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pmrtb" podUID="a4cd7c2a-4d5f-48c2-9af4-bcc237367416" containerName="registry-server" containerID="cri-o://c3838fe4209ef9113ae57deb956ef6ca941b71cb6e13a15ad9c87fff7e92adfe" gracePeriod=2
Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.246801 4942 generic.go:334] "Generic (PLEG): container finished" podID="078b3d71-94dd-42b8-8804-84590a8abe44" containerID="293ac05d862eba073555d3f1808af1fdd42c90b648ef7fc661fad46e5fb3f279" exitCode=0
Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.247040 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dh7gx" event={"ID":"078b3d71-94dd-42b8-8804-84590a8abe44","Type":"ContainerDied","Data":"293ac05d862eba073555d3f1808af1fdd42c90b648ef7fc661fad46e5fb3f279"}
Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.252675 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pmrtb_a4cd7c2a-4d5f-48c2-9af4-bcc237367416/registry-server/0.log"
Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.254238 4942 generic.go:334] "Generic (PLEG): container finished" podID="a4cd7c2a-4d5f-48c2-9af4-bcc237367416" containerID="c3838fe4209ef9113ae57deb956ef6ca941b71cb6e13a15ad9c87fff7e92adfe" exitCode=137
Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.254298 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmrtb" event={"ID":"a4cd7c2a-4d5f-48c2-9af4-bcc237367416","Type":"ContainerDied","Data":"c3838fe4209ef9113ae57deb956ef6ca941b71cb6e13a15ad9c87fff7e92adfe"}
Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.256599 4942 generic.go:334] "Generic (PLEG): container finished" podID="04adf08c-4b6b-49c1-be25-d2cc8c67dce2" containerID="c694aafcd30c06ff7abe831fc6b89c1cfac7ecced4de1f26eec3229fdb38fd35" exitCode=0
Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.256663 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhp5w" event={"ID":"04adf08c-4b6b-49c1-be25-d2cc8c67dce2","Type":"ContainerDied","Data":"c694aafcd30c06ff7abe831fc6b89c1cfac7ecced4de1f26eec3229fdb38fd35"}
Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.798334 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xhp5w"
Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.829020 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pmrtb_a4cd7c2a-4d5f-48c2-9af4-bcc237367416/registry-server/0.log"
Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.830465 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pmrtb"
Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.925504 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-utilities\") pod \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\" (UID: \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\") "
Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.925629 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-catalog-content\") pod \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\" (UID: \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\") "
Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.925703 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm557\" (UniqueName: \"kubernetes.io/projected/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-kube-api-access-tm557\") pod \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\" (UID: \"04adf08c-4b6b-49c1-be25-d2cc8c67dce2\") "
Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.926650 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-utilities" (OuterVolumeSpecName: "utilities") pod "04adf08c-4b6b-49c1-be25-d2cc8c67dce2" (UID: "04adf08c-4b6b-49c1-be25-d2cc8c67dce2"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.930849 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-kube-api-access-tm557" (OuterVolumeSpecName: "kube-api-access-tm557") pod "04adf08c-4b6b-49c1-be25-d2cc8c67dce2" (UID: "04adf08c-4b6b-49c1-be25-d2cc8c67dce2"). InnerVolumeSpecName "kube-api-access-tm557". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:32:36 crc kubenswrapper[4942]: I0218 19:32:36.981343 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04adf08c-4b6b-49c1-be25-d2cc8c67dce2" (UID: "04adf08c-4b6b-49c1-be25-d2cc8c67dce2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.026894 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-utilities\") pod \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\" (UID: \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\") " Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.027116 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-catalog-content\") pod \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\" (UID: \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\") " Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.027249 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksk4m\" (UniqueName: \"kubernetes.io/projected/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-kube-api-access-ksk4m\") pod \"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\" (UID: 
\"a4cd7c2a-4d5f-48c2-9af4-bcc237367416\") " Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.027568 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.027645 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tm557\" (UniqueName: \"kubernetes.io/projected/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-kube-api-access-tm557\") on node \"crc\" DevicePath \"\"" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.027701 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-utilities" (OuterVolumeSpecName: "utilities") pod "a4cd7c2a-4d5f-48c2-9af4-bcc237367416" (UID: "a4cd7c2a-4d5f-48c2-9af4-bcc237367416"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.027712 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04adf08c-4b6b-49c1-be25-d2cc8c67dce2-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.030394 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-kube-api-access-ksk4m" (OuterVolumeSpecName: "kube-api-access-ksk4m") pod "a4cd7c2a-4d5f-48c2-9af4-bcc237367416" (UID: "a4cd7c2a-4d5f-48c2-9af4-bcc237367416"). InnerVolumeSpecName "kube-api-access-ksk4m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.129067 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksk4m\" (UniqueName: \"kubernetes.io/projected/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-kube-api-access-ksk4m\") on node \"crc\" DevicePath \"\"" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.129339 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.172018 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4cd7c2a-4d5f-48c2-9af4-bcc237367416" (UID: "a4cd7c2a-4d5f-48c2-9af4-bcc237367416"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.230950 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4cd7c2a-4d5f-48c2-9af4-bcc237367416-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.267111 4942 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pmrtb_a4cd7c2a-4d5f-48c2-9af4-bcc237367416/registry-server/0.log" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.267747 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pmrtb" event={"ID":"a4cd7c2a-4d5f-48c2-9af4-bcc237367416","Type":"ContainerDied","Data":"dee0baac7e3a9c49a9187f5763b3e97bcf6e3a78d211e733d817f073fc2a3c4b"} Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.267805 4942 scope.go:117] "RemoveContainer" 
containerID="c3838fe4209ef9113ae57deb956ef6ca941b71cb6e13a15ad9c87fff7e92adfe" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.267931 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pmrtb" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.281435 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xhp5w" event={"ID":"04adf08c-4b6b-49c1-be25-d2cc8c67dce2","Type":"ContainerDied","Data":"5e3cbf30742449f377e56eae72b81f7947e381700f22252f373a197aae1f9b45"} Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.281566 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xhp5w" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.285054 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq" event={"ID":"11715b33-f996-46bf-81db-0557e84e7fea","Type":"ContainerStarted","Data":"c44555440393e73229dace887739ba537997ccb145b340ded9403ca08555b404"} Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.285617 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.291137 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp" event={"ID":"cde9a09e-2dfe-410e-95ad-8f297b517ef4","Type":"ContainerStarted","Data":"77c02583ef350bf347f4c26bf4f780abdaad82069f189f920d5de32d851b2698"} Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.291929 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.307456 4942 scope.go:117] 
"RemoveContainer" containerID="15e62c1b1fb7cf60b372e3cd480a3c4a8ecacdad111be63f1be8804883bfdafd" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.317068 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq" podStartSLOduration=2.197664942 podStartE2EDuration="53.317052718s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.461950282 +0000 UTC m=+865.166882947" lastFinishedPulling="2026-02-18 19:32:36.581338038 +0000 UTC m=+916.286270723" observedRunningTime="2026-02-18 19:32:37.315332134 +0000 UTC m=+917.020264799" watchObservedRunningTime="2026-02-18 19:32:37.317052718 +0000 UTC m=+917.021985383" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.327599 4942 scope.go:117] "RemoveContainer" containerID="c0f81453e7d6dc223b51f45fabda95b58d634396c8127da2162b425bed1f7043" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.342987 4942 scope.go:117] "RemoveContainer" containerID="c694aafcd30c06ff7abe831fc6b89c1cfac7ecced4de1f26eec3229fdb38fd35" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.348362 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp" podStartSLOduration=2.478352544 podStartE2EDuration="53.34832778s" podCreationTimestamp="2026-02-18 19:31:44 +0000 UTC" firstStartedPulling="2026-02-18 19:31:45.704460375 +0000 UTC m=+865.409393040" lastFinishedPulling="2026-02-18 19:32:36.574435601 +0000 UTC m=+916.279368276" observedRunningTime="2026-02-18 19:32:37.333988422 +0000 UTC m=+917.038921117" watchObservedRunningTime="2026-02-18 19:32:37.34832778 +0000 UTC m=+917.053260435" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.352252 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xhp5w"] Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 
19:32:37.359057 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xhp5w"] Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.360948 4942 scope.go:117] "RemoveContainer" containerID="ed9840277aa9db07d748c964a420663376df6cd57140cd5dec23b586bf0ce286" Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.371612 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pmrtb"] Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.374908 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pmrtb"] Feb 18 19:32:37 crc kubenswrapper[4942]: I0218 19:32:37.384547 4942 scope.go:117] "RemoveContainer" containerID="d3a4dd0670baf5152526b789369cfc767a3b4f746a7ebf6b7d9421c87331aa1b" Feb 18 19:32:38 crc kubenswrapper[4942]: I0218 19:32:38.303610 4942 generic.go:334] "Generic (PLEG): container finished" podID="078b3d71-94dd-42b8-8804-84590a8abe44" containerID="db02565180cd436fdf618f5684c2f10ad7c85a70673af35a1cea2e5ab44eca7e" exitCode=0 Feb 18 19:32:38 crc kubenswrapper[4942]: I0218 19:32:38.303928 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dh7gx" event={"ID":"078b3d71-94dd-42b8-8804-84590a8abe44","Type":"ContainerDied","Data":"db02565180cd436fdf618f5684c2f10ad7c85a70673af35a1cea2e5ab44eca7e"} Feb 18 19:32:39 crc kubenswrapper[4942]: I0218 19:32:39.045381 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04adf08c-4b6b-49c1-be25-d2cc8c67dce2" path="/var/lib/kubelet/pods/04adf08c-4b6b-49c1-be25-d2cc8c67dce2/volumes" Feb 18 19:32:39 crc kubenswrapper[4942]: I0218 19:32:39.046654 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4cd7c2a-4d5f-48c2-9af4-bcc237367416" path="/var/lib/kubelet/pods/a4cd7c2a-4d5f-48c2-9af4-bcc237367416/volumes" Feb 18 19:32:39 crc kubenswrapper[4942]: I0218 19:32:39.322030 4942 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dh7gx" event={"ID":"078b3d71-94dd-42b8-8804-84590a8abe44","Type":"ContainerStarted","Data":"39cbc18aad44935f5299446259fddc74dba1410c4fd51a8a6d0a32304548250a"} Feb 18 19:32:39 crc kubenswrapper[4942]: I0218 19:32:39.351531 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dh7gx" podStartSLOduration=16.11519219 podStartE2EDuration="18.351506436s" podCreationTimestamp="2026-02-18 19:32:21 +0000 UTC" firstStartedPulling="2026-02-18 19:32:36.504319823 +0000 UTC m=+916.209252488" lastFinishedPulling="2026-02-18 19:32:38.740634069 +0000 UTC m=+918.445566734" observedRunningTime="2026-02-18 19:32:39.351430124 +0000 UTC m=+919.056362799" watchObservedRunningTime="2026-02-18 19:32:39.351506436 +0000 UTC m=+919.056439111" Feb 18 19:32:41 crc kubenswrapper[4942]: I0218 19:32:41.592840 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dh7gx" Feb 18 19:32:41 crc kubenswrapper[4942]: I0218 19:32:41.594160 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dh7gx" Feb 18 19:32:41 crc kubenswrapper[4942]: I0218 19:32:41.652484 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dh7gx" Feb 18 19:32:44 crc kubenswrapper[4942]: I0218 19:32:44.750739 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-6c78d668d5-t9dzq" Feb 18 19:32:44 crc kubenswrapper[4942]: I0218 19:32:44.855135 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5ddd85db87-5jzdp" Feb 18 19:32:51 crc kubenswrapper[4942]: I0218 19:32:51.639917 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-dh7gx" Feb 18 19:32:51 crc kubenswrapper[4942]: I0218 19:32:51.685836 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dh7gx"] Feb 18 19:32:52 crc kubenswrapper[4942]: I0218 19:32:52.428620 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dh7gx" podUID="078b3d71-94dd-42b8-8804-84590a8abe44" containerName="registry-server" containerID="cri-o://39cbc18aad44935f5299446259fddc74dba1410c4fd51a8a6d0a32304548250a" gracePeriod=2 Feb 18 19:32:52 crc kubenswrapper[4942]: I0218 19:32:52.893638 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dh7gx" Feb 18 19:32:52 crc kubenswrapper[4942]: I0218 19:32:52.998889 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbnpw\" (UniqueName: \"kubernetes.io/projected/078b3d71-94dd-42b8-8804-84590a8abe44-kube-api-access-zbnpw\") pod \"078b3d71-94dd-42b8-8804-84590a8abe44\" (UID: \"078b3d71-94dd-42b8-8804-84590a8abe44\") " Feb 18 19:32:52 crc kubenswrapper[4942]: I0218 19:32:52.998944 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/078b3d71-94dd-42b8-8804-84590a8abe44-utilities\") pod \"078b3d71-94dd-42b8-8804-84590a8abe44\" (UID: \"078b3d71-94dd-42b8-8804-84590a8abe44\") " Feb 18 19:32:52 crc kubenswrapper[4942]: I0218 19:32:52.999088 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/078b3d71-94dd-42b8-8804-84590a8abe44-catalog-content\") pod \"078b3d71-94dd-42b8-8804-84590a8abe44\" (UID: \"078b3d71-94dd-42b8-8804-84590a8abe44\") " Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.000137 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/078b3d71-94dd-42b8-8804-84590a8abe44-utilities" (OuterVolumeSpecName: "utilities") pod "078b3d71-94dd-42b8-8804-84590a8abe44" (UID: "078b3d71-94dd-42b8-8804-84590a8abe44"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.004936 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/078b3d71-94dd-42b8-8804-84590a8abe44-kube-api-access-zbnpw" (OuterVolumeSpecName: "kube-api-access-zbnpw") pod "078b3d71-94dd-42b8-8804-84590a8abe44" (UID: "078b3d71-94dd-42b8-8804-84590a8abe44"). InnerVolumeSpecName "kube-api-access-zbnpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.026795 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/078b3d71-94dd-42b8-8804-84590a8abe44-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "078b3d71-94dd-42b8-8804-84590a8abe44" (UID: "078b3d71-94dd-42b8-8804-84590a8abe44"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.100657 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/078b3d71-94dd-42b8-8804-84590a8abe44-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.100698 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbnpw\" (UniqueName: \"kubernetes.io/projected/078b3d71-94dd-42b8-8804-84590a8abe44-kube-api-access-zbnpw\") on node \"crc\" DevicePath \"\"" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.100712 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/078b3d71-94dd-42b8-8804-84590a8abe44-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.435879 4942 generic.go:334] "Generic (PLEG): container finished" podID="078b3d71-94dd-42b8-8804-84590a8abe44" containerID="39cbc18aad44935f5299446259fddc74dba1410c4fd51a8a6d0a32304548250a" exitCode=0 Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.435931 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dh7gx" event={"ID":"078b3d71-94dd-42b8-8804-84590a8abe44","Type":"ContainerDied","Data":"39cbc18aad44935f5299446259fddc74dba1410c4fd51a8a6d0a32304548250a"} Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.435970 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dh7gx" event={"ID":"078b3d71-94dd-42b8-8804-84590a8abe44","Type":"ContainerDied","Data":"256b7d08b3f9eb072fd2995b03828975c96e7e95fda87c10cfd7a170fdd2595a"} Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.435997 4942 scope.go:117] "RemoveContainer" containerID="39cbc18aad44935f5299446259fddc74dba1410c4fd51a8a6d0a32304548250a" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 
19:32:53.436043 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dh7gx" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.458589 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dh7gx"] Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.471901 4942 scope.go:117] "RemoveContainer" containerID="db02565180cd436fdf618f5684c2f10ad7c85a70673af35a1cea2e5ab44eca7e" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.472078 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dh7gx"] Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.490407 4942 scope.go:117] "RemoveContainer" containerID="293ac05d862eba073555d3f1808af1fdd42c90b648ef7fc661fad46e5fb3f279" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.517922 4942 scope.go:117] "RemoveContainer" containerID="39cbc18aad44935f5299446259fddc74dba1410c4fd51a8a6d0a32304548250a" Feb 18 19:32:53 crc kubenswrapper[4942]: E0218 19:32:53.518290 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39cbc18aad44935f5299446259fddc74dba1410c4fd51a8a6d0a32304548250a\": container with ID starting with 39cbc18aad44935f5299446259fddc74dba1410c4fd51a8a6d0a32304548250a not found: ID does not exist" containerID="39cbc18aad44935f5299446259fddc74dba1410c4fd51a8a6d0a32304548250a" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.518342 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39cbc18aad44935f5299446259fddc74dba1410c4fd51a8a6d0a32304548250a"} err="failed to get container status \"39cbc18aad44935f5299446259fddc74dba1410c4fd51a8a6d0a32304548250a\": rpc error: code = NotFound desc = could not find container \"39cbc18aad44935f5299446259fddc74dba1410c4fd51a8a6d0a32304548250a\": container with ID starting with 
39cbc18aad44935f5299446259fddc74dba1410c4fd51a8a6d0a32304548250a not found: ID does not exist" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.518361 4942 scope.go:117] "RemoveContainer" containerID="db02565180cd436fdf618f5684c2f10ad7c85a70673af35a1cea2e5ab44eca7e" Feb 18 19:32:53 crc kubenswrapper[4942]: E0218 19:32:53.518551 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db02565180cd436fdf618f5684c2f10ad7c85a70673af35a1cea2e5ab44eca7e\": container with ID starting with db02565180cd436fdf618f5684c2f10ad7c85a70673af35a1cea2e5ab44eca7e not found: ID does not exist" containerID="db02565180cd436fdf618f5684c2f10ad7c85a70673af35a1cea2e5ab44eca7e" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.518570 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db02565180cd436fdf618f5684c2f10ad7c85a70673af35a1cea2e5ab44eca7e"} err="failed to get container status \"db02565180cd436fdf618f5684c2f10ad7c85a70673af35a1cea2e5ab44eca7e\": rpc error: code = NotFound desc = could not find container \"db02565180cd436fdf618f5684c2f10ad7c85a70673af35a1cea2e5ab44eca7e\": container with ID starting with db02565180cd436fdf618f5684c2f10ad7c85a70673af35a1cea2e5ab44eca7e not found: ID does not exist" Feb 18 19:32:53 crc kubenswrapper[4942]: I0218 19:32:53.518581 4942 scope.go:117] "RemoveContainer" containerID="293ac05d862eba073555d3f1808af1fdd42c90b648ef7fc661fad46e5fb3f279" Feb 18 19:32:53 crc kubenswrapper[4942]: E0218 19:32:53.518818 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"293ac05d862eba073555d3f1808af1fdd42c90b648ef7fc661fad46e5fb3f279\": container with ID starting with 293ac05d862eba073555d3f1808af1fdd42c90b648ef7fc661fad46e5fb3f279 not found: ID does not exist" containerID="293ac05d862eba073555d3f1808af1fdd42c90b648ef7fc661fad46e5fb3f279" Feb 18 19:32:53 crc 
kubenswrapper[4942]: I0218 19:32:53.518842 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"293ac05d862eba073555d3f1808af1fdd42c90b648ef7fc661fad46e5fb3f279"} err="failed to get container status \"293ac05d862eba073555d3f1808af1fdd42c90b648ef7fc661fad46e5fb3f279\": rpc error: code = NotFound desc = could not find container \"293ac05d862eba073555d3f1808af1fdd42c90b648ef7fc661fad46e5fb3f279\": container with ID starting with 293ac05d862eba073555d3f1808af1fdd42c90b648ef7fc661fad46e5fb3f279 not found: ID does not exist" Feb 18 19:32:55 crc kubenswrapper[4942]: I0218 19:32:55.047266 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="078b3d71-94dd-42b8-8804-84590a8abe44" path="/var/lib/kubelet/pods/078b3d71-94dd-42b8-8804-84590a8abe44/volumes" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.669378 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b2r8r"] Feb 18 19:33:02 crc kubenswrapper[4942]: E0218 19:33:02.672946 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04adf08c-4b6b-49c1-be25-d2cc8c67dce2" containerName="extract-content" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.673103 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="04adf08c-4b6b-49c1-be25-d2cc8c67dce2" containerName="extract-content" Feb 18 19:33:02 crc kubenswrapper[4942]: E0218 19:33:02.673170 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="078b3d71-94dd-42b8-8804-84590a8abe44" containerName="extract-utilities" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.673228 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="078b3d71-94dd-42b8-8804-84590a8abe44" containerName="extract-utilities" Feb 18 19:33:02 crc kubenswrapper[4942]: E0218 19:33:02.673300 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="078b3d71-94dd-42b8-8804-84590a8abe44" containerName="extract-content" Feb 18 
19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.673359 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="078b3d71-94dd-42b8-8804-84590a8abe44" containerName="extract-content" Feb 18 19:33:02 crc kubenswrapper[4942]: E0218 19:33:02.673433 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04adf08c-4b6b-49c1-be25-d2cc8c67dce2" containerName="registry-server" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.673492 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="04adf08c-4b6b-49c1-be25-d2cc8c67dce2" containerName="registry-server" Feb 18 19:33:02 crc kubenswrapper[4942]: E0218 19:33:02.673552 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04adf08c-4b6b-49c1-be25-d2cc8c67dce2" containerName="extract-utilities" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.673603 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="04adf08c-4b6b-49c1-be25-d2cc8c67dce2" containerName="extract-utilities" Feb 18 19:33:02 crc kubenswrapper[4942]: E0218 19:33:02.673662 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4cd7c2a-4d5f-48c2-9af4-bcc237367416" containerName="extract-utilities" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.673724 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4cd7c2a-4d5f-48c2-9af4-bcc237367416" containerName="extract-utilities" Feb 18 19:33:02 crc kubenswrapper[4942]: E0218 19:33:02.673809 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4cd7c2a-4d5f-48c2-9af4-bcc237367416" containerName="extract-content" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.673875 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4cd7c2a-4d5f-48c2-9af4-bcc237367416" containerName="extract-content" Feb 18 19:33:02 crc kubenswrapper[4942]: E0218 19:33:02.673935 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="078b3d71-94dd-42b8-8804-84590a8abe44" containerName="registry-server" Feb 18 
19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.673991 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="078b3d71-94dd-42b8-8804-84590a8abe44" containerName="registry-server" Feb 18 19:33:02 crc kubenswrapper[4942]: E0218 19:33:02.674047 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4cd7c2a-4d5f-48c2-9af4-bcc237367416" containerName="registry-server" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.674095 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4cd7c2a-4d5f-48c2-9af4-bcc237367416" containerName="registry-server" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.674302 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="078b3d71-94dd-42b8-8804-84590a8abe44" containerName="registry-server" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.674363 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="04adf08c-4b6b-49c1-be25-d2cc8c67dce2" containerName="registry-server" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.674424 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4cd7c2a-4d5f-48c2-9af4-bcc237367416" containerName="registry-server" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.675467 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-b2r8r" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.679077 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.679107 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.679094 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.679499 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-kcnds" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.685747 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b2r8r"] Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.746515 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vvvkv"] Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.747632 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.756126 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vvvkv"] Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.759422 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.847680 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnnbj\" (UniqueName: \"kubernetes.io/projected/73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67-kube-api-access-hnnbj\") pod \"dnsmasq-dns-675f4bcbfc-b2r8r\" (UID: \"73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b2r8r" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.847755 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67-config\") pod \"dnsmasq-dns-675f4bcbfc-b2r8r\" (UID: \"73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b2r8r" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.949528 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc86b17-5060-4828-9a92-7e40170ea226-config\") pod \"dnsmasq-dns-78dd6ddcc-vvvkv\" (UID: \"9fc86b17-5060-4828-9a92-7e40170ea226\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.949635 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnnbj\" (UniqueName: \"kubernetes.io/projected/73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67-kube-api-access-hnnbj\") pod \"dnsmasq-dns-675f4bcbfc-b2r8r\" (UID: \"73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b2r8r" Feb 
18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.949684 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fc86b17-5060-4828-9a92-7e40170ea226-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vvvkv\" (UID: \"9fc86b17-5060-4828-9a92-7e40170ea226\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.949729 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxlmd\" (UniqueName: \"kubernetes.io/projected/9fc86b17-5060-4828-9a92-7e40170ea226-kube-api-access-rxlmd\") pod \"dnsmasq-dns-78dd6ddcc-vvvkv\" (UID: \"9fc86b17-5060-4828-9a92-7e40170ea226\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.949883 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67-config\") pod \"dnsmasq-dns-675f4bcbfc-b2r8r\" (UID: \"73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b2r8r" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.950902 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67-config\") pod \"dnsmasq-dns-675f4bcbfc-b2r8r\" (UID: \"73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b2r8r" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 19:33:02.981742 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnnbj\" (UniqueName: \"kubernetes.io/projected/73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67-kube-api-access-hnnbj\") pod \"dnsmasq-dns-675f4bcbfc-b2r8r\" (UID: \"73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67\") " pod="openstack/dnsmasq-dns-675f4bcbfc-b2r8r" Feb 18 19:33:02 crc kubenswrapper[4942]: I0218 
19:33:02.996943 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-b2r8r" Feb 18 19:33:03 crc kubenswrapper[4942]: I0218 19:33:03.056925 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc86b17-5060-4828-9a92-7e40170ea226-config\") pod \"dnsmasq-dns-78dd6ddcc-vvvkv\" (UID: \"9fc86b17-5060-4828-9a92-7e40170ea226\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" Feb 18 19:33:03 crc kubenswrapper[4942]: I0218 19:33:03.057025 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fc86b17-5060-4828-9a92-7e40170ea226-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vvvkv\" (UID: \"9fc86b17-5060-4828-9a92-7e40170ea226\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" Feb 18 19:33:03 crc kubenswrapper[4942]: I0218 19:33:03.057063 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxlmd\" (UniqueName: \"kubernetes.io/projected/9fc86b17-5060-4828-9a92-7e40170ea226-kube-api-access-rxlmd\") pod \"dnsmasq-dns-78dd6ddcc-vvvkv\" (UID: \"9fc86b17-5060-4828-9a92-7e40170ea226\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" Feb 18 19:33:03 crc kubenswrapper[4942]: I0218 19:33:03.058407 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc86b17-5060-4828-9a92-7e40170ea226-config\") pod \"dnsmasq-dns-78dd6ddcc-vvvkv\" (UID: \"9fc86b17-5060-4828-9a92-7e40170ea226\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" Feb 18 19:33:03 crc kubenswrapper[4942]: I0218 19:33:03.059136 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fc86b17-5060-4828-9a92-7e40170ea226-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-vvvkv\" (UID: \"9fc86b17-5060-4828-9a92-7e40170ea226\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" Feb 18 19:33:03 crc kubenswrapper[4942]: I0218 19:33:03.085922 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxlmd\" (UniqueName: \"kubernetes.io/projected/9fc86b17-5060-4828-9a92-7e40170ea226-kube-api-access-rxlmd\") pod \"dnsmasq-dns-78dd6ddcc-vvvkv\" (UID: \"9fc86b17-5060-4828-9a92-7e40170ea226\") " pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" Feb 18 19:33:03 crc kubenswrapper[4942]: I0218 19:33:03.360694 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" Feb 18 19:33:03 crc kubenswrapper[4942]: I0218 19:33:03.506722 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b2r8r"] Feb 18 19:33:03 crc kubenswrapper[4942]: W0218 19:33:03.824221 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fc86b17_5060_4828_9a92_7e40170ea226.slice/crio-e8ed36483d76587ac38831cc4f79afb97b35245143a1273adb206d00597090f3 WatchSource:0}: Error finding container e8ed36483d76587ac38831cc4f79afb97b35245143a1273adb206d00597090f3: Status 404 returned error can't find the container with id e8ed36483d76587ac38831cc4f79afb97b35245143a1273adb206d00597090f3 Feb 18 19:33:03 crc kubenswrapper[4942]: I0218 19:33:03.825160 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vvvkv"] Feb 18 19:33:04 crc kubenswrapper[4942]: I0218 19:33:04.521150 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" event={"ID":"9fc86b17-5060-4828-9a92-7e40170ea226","Type":"ContainerStarted","Data":"e8ed36483d76587ac38831cc4f79afb97b35245143a1273adb206d00597090f3"} Feb 18 19:33:04 crc kubenswrapper[4942]: I0218 19:33:04.522746 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-b2r8r" 
event={"ID":"73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67","Type":"ContainerStarted","Data":"154555e460749d363c1de1a0b1abd6b993012b64a0c3c4566686feeefaddd1c6"} Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.504914 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b2r8r"] Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.534257 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2mvhf"] Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.535802 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.561603 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2mvhf"] Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.697787 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmzc4\" (UniqueName: \"kubernetes.io/projected/b34cdd67-e888-4718-8889-0dc284187fcc-kube-api-access-qmzc4\") pod \"dnsmasq-dns-666b6646f7-2mvhf\" (UID: \"b34cdd67-e888-4718-8889-0dc284187fcc\") " pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.697881 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b34cdd67-e888-4718-8889-0dc284187fcc-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2mvhf\" (UID: \"b34cdd67-e888-4718-8889-0dc284187fcc\") " pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.697903 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b34cdd67-e888-4718-8889-0dc284187fcc-config\") pod \"dnsmasq-dns-666b6646f7-2mvhf\" (UID: \"b34cdd67-e888-4718-8889-0dc284187fcc\") " 
pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.752130 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vvvkv"] Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.779362 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-99h4x"] Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.789837 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.799580 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b34cdd67-e888-4718-8889-0dc284187fcc-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2mvhf\" (UID: \"b34cdd67-e888-4718-8889-0dc284187fcc\") " pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.799617 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b34cdd67-e888-4718-8889-0dc284187fcc-config\") pod \"dnsmasq-dns-666b6646f7-2mvhf\" (UID: \"b34cdd67-e888-4718-8889-0dc284187fcc\") " pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.799664 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmzc4\" (UniqueName: \"kubernetes.io/projected/b34cdd67-e888-4718-8889-0dc284187fcc-kube-api-access-qmzc4\") pod \"dnsmasq-dns-666b6646f7-2mvhf\" (UID: \"b34cdd67-e888-4718-8889-0dc284187fcc\") " pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.800623 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b34cdd67-e888-4718-8889-0dc284187fcc-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2mvhf\" (UID: 
\"b34cdd67-e888-4718-8889-0dc284187fcc\") " pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.809508 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b34cdd67-e888-4718-8889-0dc284187fcc-config\") pod \"dnsmasq-dns-666b6646f7-2mvhf\" (UID: \"b34cdd67-e888-4718-8889-0dc284187fcc\") " pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.814766 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-99h4x"] Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.822093 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmzc4\" (UniqueName: \"kubernetes.io/projected/b34cdd67-e888-4718-8889-0dc284187fcc-kube-api-access-qmzc4\") pod \"dnsmasq-dns-666b6646f7-2mvhf\" (UID: \"b34cdd67-e888-4718-8889-0dc284187fcc\") " pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.862692 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.900456 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrdxp\" (UniqueName: \"kubernetes.io/projected/b7887418-e8d9-434c-a8e3-fed787cbc8c8-kube-api-access-vrdxp\") pod \"dnsmasq-dns-57d769cc4f-99h4x\" (UID: \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.900507 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7887418-e8d9-434c-a8e3-fed787cbc8c8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-99h4x\" (UID: \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:05 crc kubenswrapper[4942]: I0218 19:33:05.900632 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7887418-e8d9-434c-a8e3-fed787cbc8c8-config\") pod \"dnsmasq-dns-57d769cc4f-99h4x\" (UID: \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.002058 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7887418-e8d9-434c-a8e3-fed787cbc8c8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-99h4x\" (UID: \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.002392 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7887418-e8d9-434c-a8e3-fed787cbc8c8-config\") pod \"dnsmasq-dns-57d769cc4f-99h4x\" (UID: \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\") " 
pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.002447 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrdxp\" (UniqueName: \"kubernetes.io/projected/b7887418-e8d9-434c-a8e3-fed787cbc8c8-kube-api-access-vrdxp\") pod \"dnsmasq-dns-57d769cc4f-99h4x\" (UID: \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.003125 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7887418-e8d9-434c-a8e3-fed787cbc8c8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-99h4x\" (UID: \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.003274 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7887418-e8d9-434c-a8e3-fed787cbc8c8-config\") pod \"dnsmasq-dns-57d769cc4f-99h4x\" (UID: \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.032889 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrdxp\" (UniqueName: \"kubernetes.io/projected/b7887418-e8d9-434c-a8e3-fed787cbc8c8-kube-api-access-vrdxp\") pod \"dnsmasq-dns-57d769cc4f-99h4x\" (UID: \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.115366 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.377686 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2mvhf"] Feb 18 19:33:06 crc kubenswrapper[4942]: W0218 19:33:06.384013 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb34cdd67_e888_4718_8889_0dc284187fcc.slice/crio-2f50d2e2d9920890883c43f7e3f4d7d184c62c40d9aa2c1ba9d6825c0e37fee3 WatchSource:0}: Error finding container 2f50d2e2d9920890883c43f7e3f4d7d184c62c40d9aa2c1ba9d6825c0e37fee3: Status 404 returned error can't find the container with id 2f50d2e2d9920890883c43f7e3f4d7d184c62c40d9aa2c1ba9d6825c0e37fee3 Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.542940 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" event={"ID":"b34cdd67-e888-4718-8889-0dc284187fcc","Type":"ContainerStarted","Data":"2f50d2e2d9920890883c43f7e3f4d7d184c62c40d9aa2c1ba9d6825c0e37fee3"} Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.628731 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-99h4x"] Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.660168 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.661342 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.663523 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.663704 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-jnzzx" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.664299 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.664365 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.664469 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.665601 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.665915 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.682708 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.815678 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.815717 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.815959 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-config-data\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.816015 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.816032 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/77de5cb0-e446-407d-9e32-b13f39c84ae2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.816093 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wqkf\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-kube-api-access-8wqkf\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.816123 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/77de5cb0-e446-407d-9e32-b13f39c84ae2-erlang-cookie-secret\") pod 
\"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.816338 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.816403 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.816461 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.816514 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.918144 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.918198 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.918443 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.918485 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.918522 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-config-data\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.918554 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.918577 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/77de5cb0-e446-407d-9e32-b13f39c84ae2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.918603 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wqkf\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-kube-api-access-8wqkf\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.918628 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/77de5cb0-e446-407d-9e32-b13f39c84ae2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.918648 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.918665 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.918705 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.918724 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.919508 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.919886 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.919980 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.920554 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-config-data\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " 
pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.924897 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.925217 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/77de5cb0-e446-407d-9e32-b13f39c84ae2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.936218 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/77de5cb0-e446-407d-9e32-b13f39c84ae2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.936835 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.936915 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.939641 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.943706 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.943952 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-wp8g5" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.944061 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.944935 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.946083 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.946187 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.946301 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.949682 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wqkf\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-kube-api-access-8wqkf\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.965076 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.974192 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " pod="openstack/rabbitmq-server-0" Feb 18 19:33:06 crc kubenswrapper[4942]: I0218 19:33:06.987196 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.120877 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.120921 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b6b41292-c562-4964-bb25-d8945415b3da-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.120946 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.120964 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b6b41292-c562-4964-bb25-d8945415b3da-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.120992 4942 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.121040 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.121068 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.121086 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.121258 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.121372 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9vpz\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-kube-api-access-p9vpz\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.121399 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.225493 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9vpz\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-kube-api-access-p9vpz\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.225546 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.225573 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.225589 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b6b41292-c562-4964-bb25-d8945415b3da-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.225611 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.225628 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b6b41292-c562-4964-bb25-d8945415b3da-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.225652 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.225680 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.225705 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.225724 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.225754 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.226146 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.226492 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.227484 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.228142 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.228469 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.228478 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.230988 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b6b41292-c562-4964-bb25-d8945415b3da-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.232977 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b6b41292-c562-4964-bb25-d8945415b3da-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc 
kubenswrapper[4942]: I0218 19:33:07.233032 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.234158 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.252546 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9vpz\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-kube-api-access-p9vpz\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.253869 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:07 crc kubenswrapper[4942]: I0218 19:33:07.325486 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.168952 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.170721 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.174536 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.174701 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.174883 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-z58zg" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.175022 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.183588 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.187709 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.341739 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e39270f2-0125-43f1-a2b3-cda4813614dd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.341865 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39270f2-0125-43f1-a2b3-cda4813614dd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.341936 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e39270f2-0125-43f1-a2b3-cda4813614dd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.342034 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.342070 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e39270f2-0125-43f1-a2b3-cda4813614dd-config-data-default\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.342108 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqdfr\" (UniqueName: \"kubernetes.io/projected/e39270f2-0125-43f1-a2b3-cda4813614dd-kube-api-access-wqdfr\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.342129 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e39270f2-0125-43f1-a2b3-cda4813614dd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.342171 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/e39270f2-0125-43f1-a2b3-cda4813614dd-kolla-config\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.444906 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e39270f2-0125-43f1-a2b3-cda4813614dd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.444960 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39270f2-0125-43f1-a2b3-cda4813614dd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.445030 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e39270f2-0125-43f1-a2b3-cda4813614dd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.445058 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.445089 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e39270f2-0125-43f1-a2b3-cda4813614dd-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.445125 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqdfr\" (UniqueName: \"kubernetes.io/projected/e39270f2-0125-43f1-a2b3-cda4813614dd-kube-api-access-wqdfr\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.445146 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e39270f2-0125-43f1-a2b3-cda4813614dd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.445178 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e39270f2-0125-43f1-a2b3-cda4813614dd-kolla-config\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.445925 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.446100 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e39270f2-0125-43f1-a2b3-cda4813614dd-kolla-config\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 
19:33:08.447079 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e39270f2-0125-43f1-a2b3-cda4813614dd-config-data-default\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.449452 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e39270f2-0125-43f1-a2b3-cda4813614dd-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.449549 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e39270f2-0125-43f1-a2b3-cda4813614dd-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.451414 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e39270f2-0125-43f1-a2b3-cda4813614dd-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.454160 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e39270f2-0125-43f1-a2b3-cda4813614dd-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.477907 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.481805 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqdfr\" (UniqueName: \"kubernetes.io/projected/e39270f2-0125-43f1-a2b3-cda4813614dd-kube-api-access-wqdfr\") pod \"openstack-galera-0\" (UID: \"e39270f2-0125-43f1-a2b3-cda4813614dd\") " pod="openstack/openstack-galera-0" Feb 18 19:33:08 crc kubenswrapper[4942]: I0218 19:33:08.500493 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 18 19:33:09 crc kubenswrapper[4942]: W0218 19:33:09.521134 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7887418_e8d9_434c_a8e3_fed787cbc8c8.slice/crio-56b3e02a29b20e42401279e2e4fcf7e7debb435a70a1f70075eaa1d581cacb4f WatchSource:0}: Error finding container 56b3e02a29b20e42401279e2e4fcf7e7debb435a70a1f70075eaa1d581cacb4f: Status 404 returned error can't find the container with id 56b3e02a29b20e42401279e2e4fcf7e7debb435a70a1f70075eaa1d581cacb4f Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.564519 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" event={"ID":"b7887418-e8d9-434c-a8e3-fed787cbc8c8","Type":"ContainerStarted","Data":"56b3e02a29b20e42401279e2e4fcf7e7debb435a70a1f70075eaa1d581cacb4f"} Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.810070 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.811458 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.814933 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.815155 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.815215 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.815165 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-gxmcq" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.836527 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.846300 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.847303 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.849548 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.849730 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.849856 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-ts7vg" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.854410 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.966641 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.966938 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.966956 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hvvh\" (UniqueName: \"kubernetes.io/projected/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-kube-api-access-4hvvh\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 
19:33:09.966972 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/242ed220-c516-4f30-bb5b-69f28626101a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.967001 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/242ed220-c516-4f30-bb5b-69f28626101a-kolla-config\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.967020 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.967047 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.967062 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8fg9\" (UniqueName: \"kubernetes.io/projected/242ed220-c516-4f30-bb5b-69f28626101a-kube-api-access-w8fg9\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.967078 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/242ed220-c516-4f30-bb5b-69f28626101a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.967100 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.967121 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/242ed220-c516-4f30-bb5b-69f28626101a-config-data\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.967160 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:09 crc kubenswrapper[4942]: I0218 19:33:09.967190 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.068619 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.068661 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8fg9\" (UniqueName: \"kubernetes.io/projected/242ed220-c516-4f30-bb5b-69f28626101a-kube-api-access-w8fg9\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.068680 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/242ed220-c516-4f30-bb5b-69f28626101a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.068700 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.068723 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/242ed220-c516-4f30-bb5b-69f28626101a-config-data\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.068778 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-config-data-generated\") pod \"openstack-cell1-galera-0\" 
(UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.068807 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.068843 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.068856 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.068871 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hvvh\" (UniqueName: \"kubernetes.io/projected/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-kube-api-access-4hvvh\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.068885 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/242ed220-c516-4f30-bb5b-69f28626101a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:10 crc 
kubenswrapper[4942]: I0218 19:33:10.068916 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/242ed220-c516-4f30-bb5b-69f28626101a-kolla-config\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.069102 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.069433 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.070285 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/242ed220-c516-4f30-bb5b-69f28626101a-kolla-config\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.070355 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.070370 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" 
(UniqueName: \"kubernetes.io/configmap/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.070642 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.070958 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/242ed220-c516-4f30-bb5b-69f28626101a-config-data\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.071128 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.077441 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/242ed220-c516-4f30-bb5b-69f28626101a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.077726 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: 
\"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.080133 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.089230 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/242ed220-c516-4f30-bb5b-69f28626101a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.092254 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8fg9\" (UniqueName: \"kubernetes.io/projected/242ed220-c516-4f30-bb5b-69f28626101a-kube-api-access-w8fg9\") pod \"memcached-0\" (UID: \"242ed220-c516-4f30-bb5b-69f28626101a\") " pod="openstack/memcached-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.095059 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hvvh\" (UniqueName: \"kubernetes.io/projected/e07db76c-5ab3-430d-b9ad-eba96f02ab9e-kube-api-access-4hvvh\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.104576 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e07db76c-5ab3-430d-b9ad-eba96f02ab9e\") " pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.180973 4942 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:10 crc kubenswrapper[4942]: I0218 19:33:10.193173 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 18 19:33:12 crc kubenswrapper[4942]: I0218 19:33:12.209013 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:33:12 crc kubenswrapper[4942]: I0218 19:33:12.209948 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:33:12 crc kubenswrapper[4942]: I0218 19:33:12.212804 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-pblt6" Feb 18 19:33:12 crc kubenswrapper[4942]: I0218 19:33:12.225818 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:33:12 crc kubenswrapper[4942]: I0218 19:33:12.304958 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f75rx\" (UniqueName: \"kubernetes.io/projected/a8f1712c-12df-4ca2-81d3-dc649c747868-kube-api-access-f75rx\") pod \"kube-state-metrics-0\" (UID: \"a8f1712c-12df-4ca2-81d3-dc649c747868\") " pod="openstack/kube-state-metrics-0" Feb 18 19:33:12 crc kubenswrapper[4942]: I0218 19:33:12.406625 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f75rx\" (UniqueName: \"kubernetes.io/projected/a8f1712c-12df-4ca2-81d3-dc649c747868-kube-api-access-f75rx\") pod \"kube-state-metrics-0\" (UID: \"a8f1712c-12df-4ca2-81d3-dc649c747868\") " pod="openstack/kube-state-metrics-0" Feb 18 19:33:12 crc kubenswrapper[4942]: I0218 19:33:12.433019 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f75rx\" (UniqueName: \"kubernetes.io/projected/a8f1712c-12df-4ca2-81d3-dc649c747868-kube-api-access-f75rx\") pod 
\"kube-state-metrics-0\" (UID: \"a8f1712c-12df-4ca2-81d3-dc649c747868\") " pod="openstack/kube-state-metrics-0" Feb 18 19:33:12 crc kubenswrapper[4942]: I0218 19:33:12.534026 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.352470 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.355072 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.360350 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.360888 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.361191 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.361415 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.361563 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.362118 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7f4m2" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.362498 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.375951 4942 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.377701 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.425884 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.425994 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.426031 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.426073 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " 
pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.426237 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.426299 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pvjr\" (UniqueName: \"kubernetes.io/projected/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-kube-api-access-6pvjr\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.426418 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.426525 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.426584 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.426745 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-config\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.528100 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-config\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.528845 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.528907 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.528931 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.528979 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.529006 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.529032 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pvjr\" (UniqueName: \"kubernetes.io/projected/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-kube-api-access-6pvjr\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.529075 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.529115 4942 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.529146 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.529755 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.530096 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.530157 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: 
I0218 19:33:13.535585 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-config\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.538500 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.539097 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.539116 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.554846 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pvjr\" (UniqueName: \"kubernetes.io/projected/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-kube-api-access-6pvjr\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.565445 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" 
(UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.591829 4942 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.591871 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/70b345b463ff13ff33bce45da0f4a8796a1574afa2d8fd2ecf4f2239b34767fb/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.696944 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"prometheus-metric-storage-0\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:13 crc kubenswrapper[4942]: I0218 19:33:13.976818 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.662559 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-llsph"] Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.663822 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.670400 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-llsph"] Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.672039 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.672273 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.672490 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-2t4hx" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.677519 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-7xrn9"] Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.679513 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.733781 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7xrn9"] Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.767592 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a740e80f-15e5-4745-bb1d-96da2561f33b-etc-ovs\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.767637 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28fe292c-6cda-4e3b-bce3-544ded95930b-combined-ca-bundle\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.767664 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a740e80f-15e5-4745-bb1d-96da2561f33b-var-log\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.767692 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/28fe292c-6cda-4e3b-bce3-544ded95930b-var-run\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.767708 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/28fe292c-6cda-4e3b-bce3-544ded95930b-var-run-ovn\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.767737 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a740e80f-15e5-4745-bb1d-96da2561f33b-var-run\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.767918 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxmr6\" (UniqueName: \"kubernetes.io/projected/a740e80f-15e5-4745-bb1d-96da2561f33b-kube-api-access-dxmr6\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.767978 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhphp\" (UniqueName: \"kubernetes.io/projected/28fe292c-6cda-4e3b-bce3-544ded95930b-kube-api-access-mhphp\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.768080 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a740e80f-15e5-4745-bb1d-96da2561f33b-scripts\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.768117 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/a740e80f-15e5-4745-bb1d-96da2561f33b-var-lib\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.768243 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/28fe292c-6cda-4e3b-bce3-544ded95930b-var-log-ovn\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.768324 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/28fe292c-6cda-4e3b-bce3-544ded95930b-ovn-controller-tls-certs\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.768419 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28fe292c-6cda-4e3b-bce3-544ded95930b-scripts\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.856633 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.859587 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.862488 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-6dh9g" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.863206 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.863356 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.863388 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.863365 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.870261 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/28fe292c-6cda-4e3b-bce3-544ded95930b-var-log-ovn\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.870308 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/28fe292c-6cda-4e3b-bce3-544ded95930b-ovn-controller-tls-certs\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.870345 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28fe292c-6cda-4e3b-bce3-544ded95930b-scripts\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " 
pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.870393 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a740e80f-15e5-4745-bb1d-96da2561f33b-etc-ovs\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.870422 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28fe292c-6cda-4e3b-bce3-544ded95930b-combined-ca-bundle\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.870443 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a740e80f-15e5-4745-bb1d-96da2561f33b-var-log\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.870469 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/28fe292c-6cda-4e3b-bce3-544ded95930b-var-run\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.870483 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/28fe292c-6cda-4e3b-bce3-544ded95930b-var-run-ovn\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.870511 4942 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a740e80f-15e5-4745-bb1d-96da2561f33b-var-run\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.870530 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxmr6\" (UniqueName: \"kubernetes.io/projected/a740e80f-15e5-4745-bb1d-96da2561f33b-kube-api-access-dxmr6\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.870565 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhphp\" (UniqueName: \"kubernetes.io/projected/28fe292c-6cda-4e3b-bce3-544ded95930b-kube-api-access-mhphp\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.870585 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a740e80f-15e5-4745-bb1d-96da2561f33b-scripts\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.870613 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a740e80f-15e5-4745-bb1d-96da2561f33b-var-lib\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.871108 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/28fe292c-6cda-4e3b-bce3-544ded95930b-var-run\") pod 
\"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.871174 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a740e80f-15e5-4745-bb1d-96da2561f33b-var-lib\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.871270 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/28fe292c-6cda-4e3b-bce3-544ded95930b-var-run-ovn\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.871297 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a740e80f-15e5-4745-bb1d-96da2561f33b-var-log\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.871308 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a740e80f-15e5-4745-bb1d-96da2561f33b-var-run\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.871901 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a740e80f-15e5-4745-bb1d-96da2561f33b-etc-ovs\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.875798 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/28fe292c-6cda-4e3b-bce3-544ded95930b-var-log-ovn\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.877064 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a740e80f-15e5-4745-bb1d-96da2561f33b-scripts\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.877353 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28fe292c-6cda-4e3b-bce3-544ded95930b-combined-ca-bundle\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.881560 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/28fe292c-6cda-4e3b-bce3-544ded95930b-ovn-controller-tls-certs\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.882390 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28fe292c-6cda-4e3b-bce3-544ded95930b-scripts\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.885744 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.895413 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxmr6\" (UniqueName: \"kubernetes.io/projected/a740e80f-15e5-4745-bb1d-96da2561f33b-kube-api-access-dxmr6\") pod \"ovn-controller-ovs-7xrn9\" (UID: \"a740e80f-15e5-4745-bb1d-96da2561f33b\") " pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.900246 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhphp\" (UniqueName: \"kubernetes.io/projected/28fe292c-6cda-4e3b-bce3-544ded95930b-kube-api-access-mhphp\") pod \"ovn-controller-llsph\" (UID: \"28fe292c-6cda-4e3b-bce3-544ded95930b\") " pod="openstack/ovn-controller-llsph" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.972502 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9c56d4c-8421-4b07-992d-c0c45223259f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.972837 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9c56d4c-8421-4b07-992d-c0c45223259f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.973006 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9c56d4c-8421-4b07-992d-c0c45223259f-config\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.973112 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b9c56d4c-8421-4b07-992d-c0c45223259f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.973215 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.973297 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lwc4\" (UniqueName: \"kubernetes.io/projected/b9c56d4c-8421-4b07-992d-c0c45223259f-kube-api-access-5lwc4\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.973430 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c56d4c-8421-4b07-992d-c0c45223259f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:15 crc kubenswrapper[4942]: I0218 19:33:15.973524 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9c56d4c-8421-4b07-992d-c0c45223259f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.028241 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-llsph" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.035054 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.075214 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c56d4c-8421-4b07-992d-c0c45223259f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.075292 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9c56d4c-8421-4b07-992d-c0c45223259f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.075363 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9c56d4c-8421-4b07-992d-c0c45223259f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.075384 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9c56d4c-8421-4b07-992d-c0c45223259f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.075411 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9c56d4c-8421-4b07-992d-c0c45223259f-config\") pod \"ovsdbserver-nb-0\" (UID: 
\"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.075436 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b9c56d4c-8421-4b07-992d-c0c45223259f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.075475 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.075496 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lwc4\" (UniqueName: \"kubernetes.io/projected/b9c56d4c-8421-4b07-992d-c0c45223259f-kube-api-access-5lwc4\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.077228 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b9c56d4c-8421-4b07-992d-c0c45223259f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.077685 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9c56d4c-8421-4b07-992d-c0c45223259f-config\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.078026 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.080266 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9c56d4c-8421-4b07-992d-c0c45223259f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.081123 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c56d4c-8421-4b07-992d-c0c45223259f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.082000 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9c56d4c-8421-4b07-992d-c0c45223259f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.089095 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9c56d4c-8421-4b07-992d-c0c45223259f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.094704 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lwc4\" (UniqueName: \"kubernetes.io/projected/b9c56d4c-8421-4b07-992d-c0c45223259f-kube-api-access-5lwc4\") pod \"ovsdbserver-nb-0\" (UID: 
\"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.108809 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b9c56d4c-8421-4b07-992d-c0c45223259f\") " pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:16 crc kubenswrapper[4942]: I0218 19:33:16.174415 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.427454 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.561073 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.562735 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.562845 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.572171 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-6ghpq" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.572395 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.572558 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.572649 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.634978 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a1f9573-3ebf-4dbf-a269-938392cbd141-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.635060 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.635117 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a1f9573-3ebf-4dbf-a269-938392cbd141-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.635147 4942 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a1f9573-3ebf-4dbf-a269-938392cbd141-config\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.635176 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a1f9573-3ebf-4dbf-a269-938392cbd141-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.635195 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4a1f9573-3ebf-4dbf-a269-938392cbd141-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.635224 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-247h5\" (UniqueName: \"kubernetes.io/projected/4a1f9573-3ebf-4dbf-a269-938392cbd141-kube-api-access-247h5\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.635240 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1f9573-3ebf-4dbf-a269-938392cbd141-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.737512 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a1f9573-3ebf-4dbf-a269-938392cbd141-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.737579 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a1f9573-3ebf-4dbf-a269-938392cbd141-config\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.738007 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a1f9573-3ebf-4dbf-a269-938392cbd141-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.738060 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4a1f9573-3ebf-4dbf-a269-938392cbd141-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.738100 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-247h5\" (UniqueName: \"kubernetes.io/projected/4a1f9573-3ebf-4dbf-a269-938392cbd141-kube-api-access-247h5\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.738115 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1f9573-3ebf-4dbf-a269-938392cbd141-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: 
\"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.738143 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a1f9573-3ebf-4dbf-a269-938392cbd141-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.738193 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.738398 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.738658 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4a1f9573-3ebf-4dbf-a269-938392cbd141-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.738696 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a1f9573-3ebf-4dbf-a269-938392cbd141-config\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.739140 4942 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a1f9573-3ebf-4dbf-a269-938392cbd141-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.746993 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a1f9573-3ebf-4dbf-a269-938392cbd141-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.747604 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a1f9573-3ebf-4dbf-a269-938392cbd141-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.766457 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a1f9573-3ebf-4dbf-a269-938392cbd141-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.770637 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-247h5\" (UniqueName: \"kubernetes.io/projected/4a1f9573-3ebf-4dbf-a269-938392cbd141-kube-api-access-247h5\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.784308 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4a1f9573-3ebf-4dbf-a269-938392cbd141\") " 
pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:19 crc kubenswrapper[4942]: I0218 19:33:19.892349 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 19:33:20 crc kubenswrapper[4942]: W0218 19:33:20.103612 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77de5cb0_e446_407d_9e32_b13f39c84ae2.slice/crio-f25769d8510cd516ae5401d18772436aec7e570a6454b6b2469618103a8155cf WatchSource:0}: Error finding container f25769d8510cd516ae5401d18772436aec7e570a6454b6b2469618103a8155cf: Status 404 returned error can't find the container with id f25769d8510cd516ae5401d18772436aec7e570a6454b6b2469618103a8155cf Feb 18 19:33:20 crc kubenswrapper[4942]: E0218 19:33:20.103667 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 19:33:20 crc kubenswrapper[4942]: E0218 19:33:20.103945 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hnnbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-b2r8r_openstack(73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:33:20 crc kubenswrapper[4942]: E0218 19:33:20.105125 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-b2r8r" podUID="73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67" Feb 18 19:33:20 crc kubenswrapper[4942]: E0218 19:33:20.123526 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 19:33:20 crc kubenswrapper[4942]: E0218 19:33:20.123693 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rxlmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePul
lPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-vvvkv_openstack(9fc86b17-5060-4828-9a92-7e40170ea226): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:33:20 crc kubenswrapper[4942]: E0218 19:33:20.125149 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" podUID="9fc86b17-5060-4828-9a92-7e40170ea226" Feb 18 19:33:20 crc kubenswrapper[4942]: I0218 19:33:20.593574 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:33:20 crc kubenswrapper[4942]: W0218 19:33:20.606066 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6b41292_c562_4964_bb25_d8945415b3da.slice/crio-dbe1e5a24b02c9ef82c5a83259f9ae73faa64933195a6e2349f17abe3b76bba3 WatchSource:0}: Error finding container dbe1e5a24b02c9ef82c5a83259f9ae73faa64933195a6e2349f17abe3b76bba3: Status 404 returned error can't find the container with id dbe1e5a24b02c9ef82c5a83259f9ae73faa64933195a6e2349f17abe3b76bba3 Feb 18 19:33:20 crc kubenswrapper[4942]: I0218 19:33:20.667226 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"77de5cb0-e446-407d-9e32-b13f39c84ae2","Type":"ContainerStarted","Data":"f25769d8510cd516ae5401d18772436aec7e570a6454b6b2469618103a8155cf"} Feb 18 19:33:20 crc kubenswrapper[4942]: I0218 19:33:20.668616 4942 generic.go:334] "Generic (PLEG): container finished" podID="b34cdd67-e888-4718-8889-0dc284187fcc" containerID="031b9ea9109a76a2044d40e6de17d03777ea8f76aba5a0391d56eb6c10d14754" exitCode=0 Feb 18 19:33:20 crc kubenswrapper[4942]: I0218 19:33:20.668663 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" event={"ID":"b34cdd67-e888-4718-8889-0dc284187fcc","Type":"ContainerDied","Data":"031b9ea9109a76a2044d40e6de17d03777ea8f76aba5a0391d56eb6c10d14754"} Feb 18 19:33:20 crc kubenswrapper[4942]: I0218 19:33:20.683515 4942 generic.go:334] "Generic (PLEG): container finished" podID="b7887418-e8d9-434c-a8e3-fed787cbc8c8" containerID="756562a4164ba39c406456f5f9881491ae21aa337026dce4848f70b89d661fc0" exitCode=0 Feb 18 19:33:20 crc kubenswrapper[4942]: I0218 19:33:20.683590 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" event={"ID":"b7887418-e8d9-434c-a8e3-fed787cbc8c8","Type":"ContainerDied","Data":"756562a4164ba39c406456f5f9881491ae21aa337026dce4848f70b89d661fc0"} Feb 18 19:33:20 crc kubenswrapper[4942]: I0218 19:33:20.686827 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b6b41292-c562-4964-bb25-d8945415b3da","Type":"ContainerStarted","Data":"dbe1e5a24b02c9ef82c5a83259f9ae73faa64933195a6e2349f17abe3b76bba3"} Feb 18 19:33:20 crc kubenswrapper[4942]: I0218 19:33:20.715483 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.084106 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.095084 4942 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.149629 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.177454 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.308173 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-llsph"] Feb 18 19:33:21 crc kubenswrapper[4942]: W0218 19:33:21.321514 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28fe292c_6cda_4e3b_bce3_544ded95930b.slice/crio-bd8762695e07eaf1790db9c5f5764553cd568bf73835eb6d3d76426ce7570e2d WatchSource:0}: Error finding container bd8762695e07eaf1790db9c5f5764553cd568bf73835eb6d3d76426ce7570e2d: Status 404 returned error can't find the container with id bd8762695e07eaf1790db9c5f5764553cd568bf73835eb6d3d76426ce7570e2d Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.346498 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-b2r8r" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.350399 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.367223 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67-config\") pod \"73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67\" (UID: \"73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67\") " Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.367271 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnnbj\" (UniqueName: \"kubernetes.io/projected/73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67-kube-api-access-hnnbj\") pod \"73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67\" (UID: \"73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67\") " Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.367302 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxlmd\" (UniqueName: \"kubernetes.io/projected/9fc86b17-5060-4828-9a92-7e40170ea226-kube-api-access-rxlmd\") pod \"9fc86b17-5060-4828-9a92-7e40170ea226\" (UID: \"9fc86b17-5060-4828-9a92-7e40170ea226\") " Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.367907 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67-config" (OuterVolumeSpecName: "config") pod "73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67" (UID: "73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.368009 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc86b17-5060-4828-9a92-7e40170ea226-config\") pod \"9fc86b17-5060-4828-9a92-7e40170ea226\" (UID: \"9fc86b17-5060-4828-9a92-7e40170ea226\") " Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.368055 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fc86b17-5060-4828-9a92-7e40170ea226-dns-svc\") pod \"9fc86b17-5060-4828-9a92-7e40170ea226\" (UID: \"9fc86b17-5060-4828-9a92-7e40170ea226\") " Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.368399 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fc86b17-5060-4828-9a92-7e40170ea226-config" (OuterVolumeSpecName: "config") pod "9fc86b17-5060-4828-9a92-7e40170ea226" (UID: "9fc86b17-5060-4828-9a92-7e40170ea226"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.368869 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.368969 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fc86b17-5060-4828-9a92-7e40170ea226-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.368975 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fc86b17-5060-4828-9a92-7e40170ea226-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9fc86b17-5060-4828-9a92-7e40170ea226" (UID: "9fc86b17-5060-4828-9a92-7e40170ea226"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.373019 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fc86b17-5060-4828-9a92-7e40170ea226-kube-api-access-rxlmd" (OuterVolumeSpecName: "kube-api-access-rxlmd") pod "9fc86b17-5060-4828-9a92-7e40170ea226" (UID: "9fc86b17-5060-4828-9a92-7e40170ea226"). InnerVolumeSpecName "kube-api-access-rxlmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.373122 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67-kube-api-access-hnnbj" (OuterVolumeSpecName: "kube-api-access-hnnbj") pod "73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67" (UID: "73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67"). InnerVolumeSpecName "kube-api-access-hnnbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.470192 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnnbj\" (UniqueName: \"kubernetes.io/projected/73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67-kube-api-access-hnnbj\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.470225 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxlmd\" (UniqueName: \"kubernetes.io/projected/9fc86b17-5060-4828-9a92-7e40170ea226-kube-api-access-rxlmd\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.470236 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fc86b17-5060-4828-9a92-7e40170ea226-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.477454 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 19:33:21 
crc kubenswrapper[4942]: W0218 19:33:21.483970 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9c56d4c_8421_4b07_992d_c0c45223259f.slice/crio-b8bb8ef9f0d862adc5d74e5a0677908d61b1712b9b37898f1d630b9bf520008b WatchSource:0}: Error finding container b8bb8ef9f0d862adc5d74e5a0677908d61b1712b9b37898f1d630b9bf520008b: Status 404 returned error can't find the container with id b8bb8ef9f0d862adc5d74e5a0677908d61b1712b9b37898f1d630b9bf520008b Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.695996 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"543db3d4-08d8-473f-a6ad-7e6a5bb9734c","Type":"ContainerStarted","Data":"1193c3f2b445b73f045913a6f677cad12654f417ef42c816b25977d36d83acd7"} Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.698747 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e39270f2-0125-43f1-a2b3-cda4813614dd","Type":"ContainerStarted","Data":"5de2930cf10c161baa496b8e34743f2af6f232c5ff9c029cd31649a3c4355fdc"} Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.702067 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" event={"ID":"b34cdd67-e888-4718-8889-0dc284187fcc","Type":"ContainerStarted","Data":"4bc4279f98eaf570cc3afb16c06101e758c31001023850a03a68af7e102724fc"} Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.703047 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.704148 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-llsph" event={"ID":"28fe292c-6cda-4e3b-bce3-544ded95930b","Type":"ContainerStarted","Data":"bd8762695e07eaf1790db9c5f5764553cd568bf73835eb6d3d76426ce7570e2d"} Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 
19:33:21.705840 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8f1712c-12df-4ca2-81d3-dc649c747868","Type":"ContainerStarted","Data":"60cb4ff34d0b296ea32561c63d6c9eaa0072a589abe5d55659f37a97a3ea461d"} Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.706903 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"242ed220-c516-4f30-bb5b-69f28626101a","Type":"ContainerStarted","Data":"565fc44d6da4c257adace4097ad9ed890137eae6d9af423357d18bbc592b3fef"} Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.707365 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-b2r8r" event={"ID":"73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67","Type":"ContainerDied","Data":"154555e460749d363c1de1a0b1abd6b993012b64a0c3c4566686feeefaddd1c6"} Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.707412 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-b2r8r" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.709957 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e07db76c-5ab3-430d-b9ad-eba96f02ab9e","Type":"ContainerStarted","Data":"689297c4a8f7d9074ab9928fff44a71a5d276f927970e4ff0e03fa775cccf64e"} Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.711531 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" event={"ID":"9fc86b17-5060-4828-9a92-7e40170ea226","Type":"ContainerDied","Data":"e8ed36483d76587ac38831cc4f79afb97b35245143a1273adb206d00597090f3"} Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.711548 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-vvvkv" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.714393 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" event={"ID":"b7887418-e8d9-434c-a8e3-fed787cbc8c8","Type":"ContainerStarted","Data":"b0a08becdf6cd5acdde160303320ee77217b0a1a88e5089ff77de3d6134ce51a"} Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.714524 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.716177 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b9c56d4c-8421-4b07-992d-c0c45223259f","Type":"ContainerStarted","Data":"b8bb8ef9f0d862adc5d74e5a0677908d61b1712b9b37898f1d630b9bf520008b"} Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.721343 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" podStartSLOduration=2.847127291 podStartE2EDuration="16.72132791s" podCreationTimestamp="2026-02-18 19:33:05 +0000 UTC" firstStartedPulling="2026-02-18 19:33:06.386341715 +0000 UTC m=+946.091274380" lastFinishedPulling="2026-02-18 19:33:20.260542344 +0000 UTC m=+959.965474999" observedRunningTime="2026-02-18 19:33:21.720882468 +0000 UTC m=+961.425815133" watchObservedRunningTime="2026-02-18 19:33:21.72132791 +0000 UTC m=+961.426260575" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.755019 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" podStartSLOduration=6.014443434 podStartE2EDuration="16.754995603s" podCreationTimestamp="2026-02-18 19:33:05 +0000 UTC" firstStartedPulling="2026-02-18 19:33:09.527519458 +0000 UTC m=+949.232452133" lastFinishedPulling="2026-02-18 19:33:20.268071637 +0000 UTC m=+959.973004302" observedRunningTime="2026-02-18 19:33:21.752255223 
+0000 UTC m=+961.457187888" watchObservedRunningTime="2026-02-18 19:33:21.754995603 +0000 UTC m=+961.459928298" Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.819888 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b2r8r"] Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.826524 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-b2r8r"] Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.869970 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vvvkv"] Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.877525 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-vvvkv"] Feb 18 19:33:21 crc kubenswrapper[4942]: I0218 19:33:21.974098 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7xrn9"] Feb 18 19:33:22 crc kubenswrapper[4942]: W0218 19:33:22.425172 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda740e80f_15e5_4745_bb1d_96da2561f33b.slice/crio-3904bfe0d6114f76dfb37e616e4252bd758e05c00fdd53ca0dacf10defd1460e WatchSource:0}: Error finding container 3904bfe0d6114f76dfb37e616e4252bd758e05c00fdd53ca0dacf10defd1460e: Status 404 returned error can't find the container with id 3904bfe0d6114f76dfb37e616e4252bd758e05c00fdd53ca0dacf10defd1460e Feb 18 19:33:22 crc kubenswrapper[4942]: I0218 19:33:22.543004 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 19:33:22 crc kubenswrapper[4942]: I0218 19:33:22.727807 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7xrn9" event={"ID":"a740e80f-15e5-4745-bb1d-96da2561f33b","Type":"ContainerStarted","Data":"3904bfe0d6114f76dfb37e616e4252bd758e05c00fdd53ca0dacf10defd1460e"} Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.044487 4942 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67" path="/var/lib/kubelet/pods/73ffa88c-83a5-4da2-a7cb-6a0dbbd55a67/volumes" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.044851 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fc86b17-5060-4828-9a92-7e40170ea226" path="/var/lib/kubelet/pods/9fc86b17-5060-4828-9a92-7e40170ea226/volumes" Feb 18 19:33:23 crc kubenswrapper[4942]: W0218 19:33:23.208027 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a1f9573_3ebf_4dbf_a269_938392cbd141.slice/crio-2f16d88a6d4b2c87ebe8b7a480b25a7dbf5ae8a21bf20643a0b443083877e239 WatchSource:0}: Error finding container 2f16d88a6d4b2c87ebe8b7a480b25a7dbf5ae8a21bf20643a0b443083877e239: Status 404 returned error can't find the container with id 2f16d88a6d4b2c87ebe8b7a480b25a7dbf5ae8a21bf20643a0b443083877e239 Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.688255 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-99sfz"] Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.697485 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.701307 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.705546 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-config\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.706123 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlscn\" (UniqueName: \"kubernetes.io/projected/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-kube-api-access-jlscn\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.706191 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.706277 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-ovs-rundir\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.706385 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-ovn-rundir\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.706441 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-combined-ca-bundle\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.723166 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-99sfz"] Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.760625 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4a1f9573-3ebf-4dbf-a269-938392cbd141","Type":"ContainerStarted","Data":"2f16d88a6d4b2c87ebe8b7a480b25a7dbf5ae8a21bf20643a0b443083877e239"} Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.807581 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-ovn-rundir\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.807641 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-combined-ca-bundle\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc 
kubenswrapper[4942]: I0218 19:33:23.807700 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-config\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.807726 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlscn\" (UniqueName: \"kubernetes.io/projected/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-kube-api-access-jlscn\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.807827 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.807885 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-ovs-rundir\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.808249 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-ovs-rundir\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.808635 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-config\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.808728 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-ovn-rundir\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.816908 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.819462 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-99h4x"] Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.819678 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" podUID="b7887418-e8d9-434c-a8e3-fed787cbc8c8" containerName="dnsmasq-dns" containerID="cri-o://b0a08becdf6cd5acdde160303320ee77217b0a1a88e5089ff77de3d6134ce51a" gracePeriod=10 Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.829861 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-combined-ca-bundle\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc 
kubenswrapper[4942]: I0218 19:33:23.830375 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlscn\" (UniqueName: \"kubernetes.io/projected/c3b96f02-5a44-4d7e-842c-392c9a0a73f3-kube-api-access-jlscn\") pod \"ovn-controller-metrics-99sfz\" (UID: \"c3b96f02-5a44-4d7e-842c-392c9a0a73f3\") " pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.858651 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-nblkl"] Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.868288 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.873035 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.888316 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-nblkl"] Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.909384 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-config\") pod \"dnsmasq-dns-7fd796d7df-nblkl\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.909486 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-nblkl\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.909558 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-nblkl\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.909585 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gltq\" (UniqueName: \"kubernetes.io/projected/782cbd43-a7c9-45f4-99e3-44fe770be6a5-kube-api-access-5gltq\") pod \"dnsmasq-dns-7fd796d7df-nblkl\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.958011 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2mvhf"] Feb 18 19:33:23 crc kubenswrapper[4942]: I0218 19:33:23.999002 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6896v"] Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.001328 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.006336 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.010256 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.010420 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.010522 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-nblkl\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.010598 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gltq\" (UniqueName: \"kubernetes.io/projected/782cbd43-a7c9-45f4-99e3-44fe770be6a5-kube-api-access-5gltq\") pod \"dnsmasq-dns-7fd796d7df-nblkl\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.010695 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xzjg8\" (UniqueName: \"kubernetes.io/projected/d783b8b1-2938-4635-8a04-df942aa84383-kube-api-access-xzjg8\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.010787 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.010892 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-config\") pod \"dnsmasq-dns-7fd796d7df-nblkl\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.010965 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-config\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.011063 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-nblkl\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.012019 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-nblkl\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.010279 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6896v"] Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.012808 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-nblkl\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.013702 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-config\") pod \"dnsmasq-dns-7fd796d7df-nblkl\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.049224 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-99sfz" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.066819 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gltq\" (UniqueName: \"kubernetes.io/projected/782cbd43-a7c9-45f4-99e3-44fe770be6a5-kube-api-access-5gltq\") pod \"dnsmasq-dns-7fd796d7df-nblkl\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.112794 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzjg8\" (UniqueName: \"kubernetes.io/projected/d783b8b1-2938-4635-8a04-df942aa84383-kube-api-access-xzjg8\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.112852 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.112922 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-config\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.112964 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.113009 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.114255 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.114255 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-config\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.114339 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.115399 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.128624 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzjg8\" (UniqueName: \"kubernetes.io/projected/d783b8b1-2938-4635-8a04-df942aa84383-kube-api-access-xzjg8\") pod \"dnsmasq-dns-86db49b7ff-6896v\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.273057 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.375429 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.768321 4942 generic.go:334] "Generic (PLEG): container finished" podID="b7887418-e8d9-434c-a8e3-fed787cbc8c8" containerID="b0a08becdf6cd5acdde160303320ee77217b0a1a88e5089ff77de3d6134ce51a" exitCode=0 Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.768490 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" podUID="b34cdd67-e888-4718-8889-0dc284187fcc" containerName="dnsmasq-dns" containerID="cri-o://4bc4279f98eaf570cc3afb16c06101e758c31001023850a03a68af7e102724fc" gracePeriod=10 Feb 18 19:33:24 crc kubenswrapper[4942]: I0218 19:33:24.768631 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" event={"ID":"b7887418-e8d9-434c-a8e3-fed787cbc8c8","Type":"ContainerDied","Data":"b0a08becdf6cd5acdde160303320ee77217b0a1a88e5089ff77de3d6134ce51a"} Feb 18 19:33:24 crc kubenswrapper[4942]: E0218 19:33:24.936188 4942 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb34cdd67_e888_4718_8889_0dc284187fcc.slice/crio-4bc4279f98eaf570cc3afb16c06101e758c31001023850a03a68af7e102724fc.scope\": RecentStats: 
unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb34cdd67_e888_4718_8889_0dc284187fcc.slice/crio-conmon-4bc4279f98eaf570cc3afb16c06101e758c31001023850a03a68af7e102724fc.scope\": RecentStats: unable to find data in memory cache]" Feb 18 19:33:25 crc kubenswrapper[4942]: I0218 19:33:25.779323 4942 generic.go:334] "Generic (PLEG): container finished" podID="b34cdd67-e888-4718-8889-0dc284187fcc" containerID="4bc4279f98eaf570cc3afb16c06101e758c31001023850a03a68af7e102724fc" exitCode=0 Feb 18 19:33:25 crc kubenswrapper[4942]: I0218 19:33:25.779410 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" event={"ID":"b34cdd67-e888-4718-8889-0dc284187fcc","Type":"ContainerDied","Data":"4bc4279f98eaf570cc3afb16c06101e758c31001023850a03a68af7e102724fc"} Feb 18 19:33:25 crc kubenswrapper[4942]: I0218 19:33:25.863333 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" podUID="b34cdd67-e888-4718-8889-0dc284187fcc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.103:5353: connect: connection refused" Feb 18 19:33:26 crc kubenswrapper[4942]: I0218 19:33:26.117429 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" podUID="b7887418-e8d9-434c-a8e3-fed787cbc8c8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.104:5353: connect: connection refused" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.714951 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.720425 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.784139 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrdxp\" (UniqueName: \"kubernetes.io/projected/b7887418-e8d9-434c-a8e3-fed787cbc8c8-kube-api-access-vrdxp\") pod \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\" (UID: \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\") " Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.784240 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmzc4\" (UniqueName: \"kubernetes.io/projected/b34cdd67-e888-4718-8889-0dc284187fcc-kube-api-access-qmzc4\") pod \"b34cdd67-e888-4718-8889-0dc284187fcc\" (UID: \"b34cdd67-e888-4718-8889-0dc284187fcc\") " Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.784287 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7887418-e8d9-434c-a8e3-fed787cbc8c8-dns-svc\") pod \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\" (UID: \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\") " Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.784354 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b34cdd67-e888-4718-8889-0dc284187fcc-dns-svc\") pod \"b34cdd67-e888-4718-8889-0dc284187fcc\" (UID: \"b34cdd67-e888-4718-8889-0dc284187fcc\") " Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.784381 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b34cdd67-e888-4718-8889-0dc284187fcc-config\") pod \"b34cdd67-e888-4718-8889-0dc284187fcc\" (UID: \"b34cdd67-e888-4718-8889-0dc284187fcc\") " Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.784407 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b7887418-e8d9-434c-a8e3-fed787cbc8c8-config\") pod \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\" (UID: \"b7887418-e8d9-434c-a8e3-fed787cbc8c8\") " Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.799056 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b34cdd67-e888-4718-8889-0dc284187fcc-kube-api-access-qmzc4" (OuterVolumeSpecName: "kube-api-access-qmzc4") pod "b34cdd67-e888-4718-8889-0dc284187fcc" (UID: "b34cdd67-e888-4718-8889-0dc284187fcc"). InnerVolumeSpecName "kube-api-access-qmzc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.800693 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7887418-e8d9-434c-a8e3-fed787cbc8c8-kube-api-access-vrdxp" (OuterVolumeSpecName: "kube-api-access-vrdxp") pod "b7887418-e8d9-434c-a8e3-fed787cbc8c8" (UID: "b7887418-e8d9-434c-a8e3-fed787cbc8c8"). InnerVolumeSpecName "kube-api-access-vrdxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.813246 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.813239 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2mvhf" event={"ID":"b34cdd67-e888-4718-8889-0dc284187fcc","Type":"ContainerDied","Data":"2f50d2e2d9920890883c43f7e3f4d7d184c62c40d9aa2c1ba9d6825c0e37fee3"} Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.813371 4942 scope.go:117] "RemoveContainer" containerID="4bc4279f98eaf570cc3afb16c06101e758c31001023850a03a68af7e102724fc" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.815135 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" event={"ID":"b7887418-e8d9-434c-a8e3-fed787cbc8c8","Type":"ContainerDied","Data":"56b3e02a29b20e42401279e2e4fcf7e7debb435a70a1f70075eaa1d581cacb4f"} Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.815178 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-99h4x" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.834462 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7887418-e8d9-434c-a8e3-fed787cbc8c8-config" (OuterVolumeSpecName: "config") pod "b7887418-e8d9-434c-a8e3-fed787cbc8c8" (UID: "b7887418-e8d9-434c-a8e3-fed787cbc8c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.839135 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b34cdd67-e888-4718-8889-0dc284187fcc-config" (OuterVolumeSpecName: "config") pod "b34cdd67-e888-4718-8889-0dc284187fcc" (UID: "b34cdd67-e888-4718-8889-0dc284187fcc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.847263 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7887418-e8d9-434c-a8e3-fed787cbc8c8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b7887418-e8d9-434c-a8e3-fed787cbc8c8" (UID: "b7887418-e8d9-434c-a8e3-fed787cbc8c8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.847859 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b34cdd67-e888-4718-8889-0dc284187fcc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b34cdd67-e888-4718-8889-0dc284187fcc" (UID: "b34cdd67-e888-4718-8889-0dc284187fcc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.886307 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrdxp\" (UniqueName: \"kubernetes.io/projected/b7887418-e8d9-434c-a8e3-fed787cbc8c8-kube-api-access-vrdxp\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.886388 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmzc4\" (UniqueName: \"kubernetes.io/projected/b34cdd67-e888-4718-8889-0dc284187fcc-kube-api-access-qmzc4\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.886404 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7887418-e8d9-434c-a8e3-fed787cbc8c8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.886417 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b34cdd67-e888-4718-8889-0dc284187fcc-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:28 crc 
kubenswrapper[4942]: I0218 19:33:28.886428 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b34cdd67-e888-4718-8889-0dc284187fcc-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:28 crc kubenswrapper[4942]: I0218 19:33:28.886439 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7887418-e8d9-434c-a8e3-fed787cbc8c8-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:29 crc kubenswrapper[4942]: I0218 19:33:29.057527 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6896v"] Feb 18 19:33:29 crc kubenswrapper[4942]: I0218 19:33:29.138285 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2mvhf"] Feb 18 19:33:29 crc kubenswrapper[4942]: I0218 19:33:29.150329 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2mvhf"] Feb 18 19:33:29 crc kubenswrapper[4942]: I0218 19:33:29.158462 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-99h4x"] Feb 18 19:33:29 crc kubenswrapper[4942]: I0218 19:33:29.169437 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-99h4x"] Feb 18 19:33:29 crc kubenswrapper[4942]: W0218 19:33:29.550995 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd783b8b1_2938_4635_8a04_df942aa84383.slice/crio-448c589fbd7559c4745406aafcb7a6277e2c8e57050b505f7abd3899347233bb WatchSource:0}: Error finding container 448c589fbd7559c4745406aafcb7a6277e2c8e57050b505f7abd3899347233bb: Status 404 returned error can't find the container with id 448c589fbd7559c4745406aafcb7a6277e2c8e57050b505f7abd3899347233bb Feb 18 19:33:29 crc kubenswrapper[4942]: I0218 19:33:29.648438 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-7fd796d7df-nblkl"] Feb 18 19:33:29 crc kubenswrapper[4942]: I0218 19:33:29.648577 4942 scope.go:117] "RemoveContainer" containerID="031b9ea9109a76a2044d40e6de17d03777ea8f76aba5a0391d56eb6c10d14754" Feb 18 19:33:29 crc kubenswrapper[4942]: W0218 19:33:29.719817 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod782cbd43_a7c9_45f4_99e3_44fe770be6a5.slice/crio-81a7746b89eb6f40a6863d6c1c0673a32e9e2c5723e21fb76df57dee6d01c96b WatchSource:0}: Error finding container 81a7746b89eb6f40a6863d6c1c0673a32e9e2c5723e21fb76df57dee6d01c96b: Status 404 returned error can't find the container with id 81a7746b89eb6f40a6863d6c1c0673a32e9e2c5723e21fb76df57dee6d01c96b Feb 18 19:33:29 crc kubenswrapper[4942]: I0218 19:33:29.820787 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6896v" event={"ID":"d783b8b1-2938-4635-8a04-df942aa84383","Type":"ContainerStarted","Data":"448c589fbd7559c4745406aafcb7a6277e2c8e57050b505f7abd3899347233bb"} Feb 18 19:33:29 crc kubenswrapper[4942]: I0218 19:33:29.822638 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" event={"ID":"782cbd43-a7c9-45f4-99e3-44fe770be6a5","Type":"ContainerStarted","Data":"81a7746b89eb6f40a6863d6c1c0673a32e9e2c5723e21fb76df57dee6d01c96b"} Feb 18 19:33:29 crc kubenswrapper[4942]: I0218 19:33:29.863942 4942 scope.go:117] "RemoveContainer" containerID="b0a08becdf6cd5acdde160303320ee77217b0a1a88e5089ff77de3d6134ce51a" Feb 18 19:33:29 crc kubenswrapper[4942]: I0218 19:33:29.954378 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-99sfz"] Feb 18 19:33:30 crc kubenswrapper[4942]: I0218 19:33:30.343265 4942 scope.go:117] "RemoveContainer" containerID="756562a4164ba39c406456f5f9881491ae21aa337026dce4848f70b89d661fc0" Feb 18 19:33:30 crc kubenswrapper[4942]: I0218 19:33:30.830339 4942 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-99sfz" event={"ID":"c3b96f02-5a44-4d7e-842c-392c9a0a73f3","Type":"ContainerStarted","Data":"ac194fb62e391ae98cae815de8bacf33b14e11c43ab0d45b0e7c3ee83dbc6409"} Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.048887 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b34cdd67-e888-4718-8889-0dc284187fcc" path="/var/lib/kubelet/pods/b34cdd67-e888-4718-8889-0dc284187fcc/volumes" Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.049950 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7887418-e8d9-434c-a8e3-fed787cbc8c8" path="/var/lib/kubelet/pods/b7887418-e8d9-434c-a8e3-fed787cbc8c8/volumes" Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.841435 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"242ed220-c516-4f30-bb5b-69f28626101a","Type":"ContainerStarted","Data":"4f943f7c87633faeb2b85c6c602161dec57abba9259fc1f9a6aa3507c0e0a0df"} Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.841794 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.843197 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4a1f9573-3ebf-4dbf-a269-938392cbd141","Type":"ContainerStarted","Data":"9873a75a0949d55b8ff400f2baf8d74407ceaed4597b522e83ec4925a27a4e86"} Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.844375 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b6b41292-c562-4964-bb25-d8945415b3da","Type":"ContainerStarted","Data":"c197a7dd3977502f99f2f3aa2cb1b55953ff18362b376d981b554df6b529f782"} Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.846334 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"e39270f2-0125-43f1-a2b3-cda4813614dd","Type":"ContainerStarted","Data":"9f2574008b8624c11cba68f579e99cde6bb78c2c6d362c6e137f9045a10b1455"}
Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.848126 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e07db76c-5ab3-430d-b9ad-eba96f02ab9e","Type":"ContainerStarted","Data":"ac00be85d3d8ee12a284874f5659d3c120ae8405f315e66cca8afed5300f1248"}
Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.850333 4942 generic.go:334] "Generic (PLEG): container finished" podID="d783b8b1-2938-4635-8a04-df942aa84383" containerID="b7b09518d61e90a2b8119c940a1f1623600d941819cf448300a00306d69169af" exitCode=0
Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.850381 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6896v" event={"ID":"d783b8b1-2938-4635-8a04-df942aa84383","Type":"ContainerDied","Data":"b7b09518d61e90a2b8119c940a1f1623600d941819cf448300a00306d69169af"}
Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.853229 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b9c56d4c-8421-4b07-992d-c0c45223259f","Type":"ContainerStarted","Data":"e3db5e44608e23b4b739d3efeab5f4582ba590d656f2c2a3f42a83bbd390e150"}
Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.860635 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=15.046268619 podStartE2EDuration="22.860613368s" podCreationTimestamp="2026-02-18 19:33:09 +0000 UTC" firstStartedPulling="2026-02-18 19:33:20.753715003 +0000 UTC m=+960.458647668" lastFinishedPulling="2026-02-18 19:33:28.568059752 +0000 UTC m=+968.272992417" observedRunningTime="2026-02-18 19:33:31.854869911 +0000 UTC m=+971.559802596" watchObservedRunningTime="2026-02-18 19:33:31.860613368 +0000 UTC m=+971.565546033"
Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.863505 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8f1712c-12df-4ca2-81d3-dc649c747868","Type":"ContainerStarted","Data":"91cd24a25481f6b5fa46205492b122ca37c2c2a0ef88de3487c62657546ed3a6"}
Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.864199 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 18 19:33:31 crc kubenswrapper[4942]: I0218 19:33:31.978297 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.416754996 podStartE2EDuration="19.978280466s" podCreationTimestamp="2026-02-18 19:33:12 +0000 UTC" firstStartedPulling="2026-02-18 19:33:21.124886442 +0000 UTC m=+960.829819107" lastFinishedPulling="2026-02-18 19:33:30.686411922 +0000 UTC m=+970.391344577" observedRunningTime="2026-02-18 19:33:31.973859162 +0000 UTC m=+971.678791827" watchObservedRunningTime="2026-02-18 19:33:31.978280466 +0000 UTC m=+971.683213131"
Feb 18 19:33:32 crc kubenswrapper[4942]: I0218 19:33:32.870895 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"77de5cb0-e446-407d-9e32-b13f39c84ae2","Type":"ContainerStarted","Data":"e242de7f4af5755759f500d3c9dbc2395ec18d3bfe3fe38cf008cae5b5314de3"}
Feb 18 19:33:32 crc kubenswrapper[4942]: I0218 19:33:32.879466 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7xrn9" event={"ID":"a740e80f-15e5-4745-bb1d-96da2561f33b","Type":"ContainerStarted","Data":"cd738650668287e0a0fd738cee99e91bf889dfe4cc5467bd8f993b04d839a48f"}
Feb 18 19:33:33 crc kubenswrapper[4942]: I0218 19:33:33.889495 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"543db3d4-08d8-473f-a6ad-7e6a5bb9734c","Type":"ContainerStarted","Data":"81a3193c7a82e4ed4f2a5322d29f8d82024b97bad905eacfd10f035fcf65ddf4"}
Feb 18 19:33:33 crc kubenswrapper[4942]: I0218 19:33:33.893716 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4a1f9573-3ebf-4dbf-a269-938392cbd141","Type":"ContainerStarted","Data":"2abfd758292841dc6a4b717ca2ef392a774d75701b7d447419a69c6bfa7204e7"}
Feb 18 19:33:33 crc kubenswrapper[4942]: I0218 19:33:33.896338 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6896v" event={"ID":"d783b8b1-2938-4635-8a04-df942aa84383","Type":"ContainerStarted","Data":"d4f1c1c791b1dde07b4dcac7910f06502d1dd5c9b462e412bdca411f39c47164"}
Feb 18 19:33:33 crc kubenswrapper[4942]: I0218 19:33:33.897295 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-6896v"
Feb 18 19:33:33 crc kubenswrapper[4942]: I0218 19:33:33.899132 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-99sfz" event={"ID":"c3b96f02-5a44-4d7e-842c-392c9a0a73f3","Type":"ContainerStarted","Data":"61290553a67df46acac39c3afc33067e38282f15c3f5725b2cbf755f2022bc98"}
Feb 18 19:33:33 crc kubenswrapper[4942]: I0218 19:33:33.900989 4942 generic.go:334] "Generic (PLEG): container finished" podID="782cbd43-a7c9-45f4-99e3-44fe770be6a5" containerID="7e686888a5f0752dbf5d7b1d5a9c7b87451890452b5fc3cae41ff40186646673" exitCode=0
Feb 18 19:33:33 crc kubenswrapper[4942]: I0218 19:33:33.901057 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" event={"ID":"782cbd43-a7c9-45f4-99e3-44fe770be6a5","Type":"ContainerDied","Data":"7e686888a5f0752dbf5d7b1d5a9c7b87451890452b5fc3cae41ff40186646673"}
Feb 18 19:33:33 crc kubenswrapper[4942]: I0218 19:33:33.905837 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b9c56d4c-8421-4b07-992d-c0c45223259f","Type":"ContainerStarted","Data":"991fc750407177fd31754e5947897f96b9cb1885b5ff5270501481a94860c3d9"}
Feb 18 19:33:33 crc kubenswrapper[4942]: I0218 19:33:33.908807 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-llsph" event={"ID":"28fe292c-6cda-4e3b-bce3-544ded95930b","Type":"ContainerStarted","Data":"83cfc67a9754d3914d9e4c2da74236b9954b79567c5032678f8aea995daedba3"}
Feb 18 19:33:33 crc kubenswrapper[4942]: I0218 19:33:33.909302 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-llsph"
Feb 18 19:33:33 crc kubenswrapper[4942]: I0218 19:33:33.911014 4942 generic.go:334] "Generic (PLEG): container finished" podID="a740e80f-15e5-4745-bb1d-96da2561f33b" containerID="cd738650668287e0a0fd738cee99e91bf889dfe4cc5467bd8f993b04d839a48f" exitCode=0
Feb 18 19:33:33 crc kubenswrapper[4942]: I0218 19:33:33.911138 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7xrn9" event={"ID":"a740e80f-15e5-4745-bb1d-96da2561f33b","Type":"ContainerDied","Data":"cd738650668287e0a0fd738cee99e91bf889dfe4cc5467bd8f993b04d839a48f"}
Feb 18 19:33:33 crc kubenswrapper[4942]: I0218 19:33:33.945061 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=6.669252565 podStartE2EDuration="15.945021487s" podCreationTimestamp="2026-02-18 19:33:18 +0000 UTC" firstStartedPulling="2026-02-18 19:33:23.210827781 +0000 UTC m=+962.915760446" lastFinishedPulling="2026-02-18 19:33:32.486596703 +0000 UTC m=+972.191529368" observedRunningTime="2026-02-18 19:33:33.938517901 +0000 UTC m=+973.643450576" watchObservedRunningTime="2026-02-18 19:33:33.945021487 +0000 UTC m=+973.649954152"
Feb 18 19:33:33 crc kubenswrapper[4942]: I0218 19:33:33.997016 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-6896v" podStartSLOduration=10.99699161 podStartE2EDuration="10.99699161s" podCreationTimestamp="2026-02-18 19:33:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:33:33.9688914 +0000 UTC m=+973.673824085" watchObservedRunningTime="2026-02-18 19:33:33.99699161 +0000 UTC m=+973.701924345"
Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.043227 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=9.023651081 podStartE2EDuration="20.043210496s" podCreationTimestamp="2026-02-18 19:33:14 +0000 UTC" firstStartedPulling="2026-02-18 19:33:21.486581809 +0000 UTC m=+961.191514474" lastFinishedPulling="2026-02-18 19:33:32.506141224 +0000 UTC m=+972.211073889" observedRunningTime="2026-02-18 19:33:34.033355663 +0000 UTC m=+973.738288328" watchObservedRunningTime="2026-02-18 19:33:34.043210496 +0000 UTC m=+973.748143161"
Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.073919 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-llsph" podStartSLOduration=10.602932863 podStartE2EDuration="19.07380879s" podCreationTimestamp="2026-02-18 19:33:15 +0000 UTC" firstStartedPulling="2026-02-18 19:33:21.329434828 +0000 UTC m=+961.034367493" lastFinishedPulling="2026-02-18 19:33:29.800310755 +0000 UTC m=+969.505243420" observedRunningTime="2026-02-18 19:33:34.052196026 +0000 UTC m=+973.757128681" watchObservedRunningTime="2026-02-18 19:33:34.07380879 +0000 UTC m=+973.778741475"
Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.085951 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-99sfz" podStartSLOduration=8.919069585999999 podStartE2EDuration="11.085935361s" podCreationTimestamp="2026-02-18 19:33:23 +0000 UTC" firstStartedPulling="2026-02-18 19:33:30.320944679 +0000 UTC m=+970.025877364" lastFinishedPulling="2026-02-18 19:33:32.487810464 +0000 UTC m=+972.192743139" observedRunningTime="2026-02-18 19:33:34.073212185 +0000 UTC m=+973.778144850" watchObservedRunningTime="2026-02-18 19:33:34.085935361 +0000 UTC m=+973.790868026"
Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.175437 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.211297 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.892542 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.893114 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.924495 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7xrn9" event={"ID":"a740e80f-15e5-4745-bb1d-96da2561f33b","Type":"ContainerStarted","Data":"4c29434e4ea049fcd7d697109defd55fb1b1f3dcfda8e150d2804fc7850db638"}
Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.924544 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7xrn9" event={"ID":"a740e80f-15e5-4745-bb1d-96da2561f33b","Type":"ContainerStarted","Data":"3f0f52902ab2a0187531db04018e8f0d7cef935288255fc274a26a8773c0630f"}
Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.924819 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7xrn9"
Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.925063 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7xrn9"
Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.929156 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" event={"ID":"782cbd43-a7c9-45f4-99e3-44fe770be6a5","Type":"ContainerStarted","Data":"9a09400e944780331e251a7d55ef689b64cf9b9306241c5f789fb2fd71f6617c"}
Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.931033 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.956822 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-7xrn9" podStartSLOduration=12.587098882 podStartE2EDuration="19.956802247s" podCreationTimestamp="2026-02-18 19:33:15 +0000 UTC" firstStartedPulling="2026-02-18 19:33:22.430475357 +0000 UTC m=+962.135408022" lastFinishedPulling="2026-02-18 19:33:29.800178712 +0000 UTC m=+969.505111387" observedRunningTime="2026-02-18 19:33:34.94987917 +0000 UTC m=+974.654811865" watchObservedRunningTime="2026-02-18 19:33:34.956802247 +0000 UTC m=+974.661734932"
Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.962642 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Feb 18 19:33:34 crc kubenswrapper[4942]: I0218 19:33:34.998702 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" podStartSLOduration=11.998681121 podStartE2EDuration="11.998681121s" podCreationTimestamp="2026-02-18 19:33:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:33:34.9982347 +0000 UTC m=+974.703167365" watchObservedRunningTime="2026-02-18 19:33:34.998681121 +0000 UTC m=+974.703613796"
Feb 18 19:33:35 crc kubenswrapper[4942]: I0218 19:33:35.936944 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl"
Feb 18 19:33:36 crc kubenswrapper[4942]: I0218 19:33:36.211620 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Feb 18 19:33:36 crc kubenswrapper[4942]: I0218 19:33:36.945187 4942 generic.go:334] "Generic (PLEG): container finished" podID="e39270f2-0125-43f1-a2b3-cda4813614dd" containerID="9f2574008b8624c11cba68f579e99cde6bb78c2c6d362c6e137f9045a10b1455" exitCode=0
Feb 18 19:33:36 crc kubenswrapper[4942]: I0218 19:33:36.945268 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e39270f2-0125-43f1-a2b3-cda4813614dd","Type":"ContainerDied","Data":"9f2574008b8624c11cba68f579e99cde6bb78c2c6d362c6e137f9045a10b1455"}
Feb 18 19:33:36 crc kubenswrapper[4942]: I0218 19:33:36.947032 4942 generic.go:334] "Generic (PLEG): container finished" podID="e07db76c-5ab3-430d-b9ad-eba96f02ab9e" containerID="ac00be85d3d8ee12a284874f5659d3c120ae8405f315e66cca8afed5300f1248" exitCode=0
Feb 18 19:33:36 crc kubenswrapper[4942]: I0218 19:33:36.948164 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e07db76c-5ab3-430d-b9ad-eba96f02ab9e","Type":"ContainerDied","Data":"ac00be85d3d8ee12a284874f5659d3c120ae8405f315e66cca8afed5300f1248"}
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.011364 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.194159 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 18 19:33:37 crc kubenswrapper[4942]: E0218 19:33:37.194575 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b34cdd67-e888-4718-8889-0dc284187fcc" containerName="init"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.194593 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b34cdd67-e888-4718-8889-0dc284187fcc" containerName="init"
Feb 18 19:33:37 crc kubenswrapper[4942]: E0218 19:33:37.194623 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7887418-e8d9-434c-a8e3-fed787cbc8c8" containerName="init"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.194631 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7887418-e8d9-434c-a8e3-fed787cbc8c8" containerName="init"
Feb 18 19:33:37 crc kubenswrapper[4942]: E0218 19:33:37.194669 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7887418-e8d9-434c-a8e3-fed787cbc8c8" containerName="dnsmasq-dns"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.194678 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7887418-e8d9-434c-a8e3-fed787cbc8c8" containerName="dnsmasq-dns"
Feb 18 19:33:37 crc kubenswrapper[4942]: E0218 19:33:37.194693 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b34cdd67-e888-4718-8889-0dc284187fcc" containerName="dnsmasq-dns"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.194701 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b34cdd67-e888-4718-8889-0dc284187fcc" containerName="dnsmasq-dns"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.194924 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7887418-e8d9-434c-a8e3-fed787cbc8c8" containerName="dnsmasq-dns"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.194953 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="b34cdd67-e888-4718-8889-0dc284187fcc" containerName="dnsmasq-dns"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.196032 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.206327 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.206541 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-tcwkq"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.206811 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.206997 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.223992 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.264458 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/056e639a-0805-4bb7-b0bd-620d9c67e6e2-scripts\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.264528 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/056e639a-0805-4bb7-b0bd-620d9c67e6e2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.264558 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056e639a-0805-4bb7-b0bd-620d9c67e6e2-config\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.264803 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj477\" (UniqueName: \"kubernetes.io/projected/056e639a-0805-4bb7-b0bd-620d9c67e6e2-kube-api-access-hj477\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.264885 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/056e639a-0805-4bb7-b0bd-620d9c67e6e2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.264990 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/056e639a-0805-4bb7-b0bd-620d9c67e6e2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.265071 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/056e639a-0805-4bb7-b0bd-620d9c67e6e2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.366326 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj477\" (UniqueName: \"kubernetes.io/projected/056e639a-0805-4bb7-b0bd-620d9c67e6e2-kube-api-access-hj477\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.366574 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/056e639a-0805-4bb7-b0bd-620d9c67e6e2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.366701 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/056e639a-0805-4bb7-b0bd-620d9c67e6e2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.366799 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/056e639a-0805-4bb7-b0bd-620d9c67e6e2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.366899 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/056e639a-0805-4bb7-b0bd-620d9c67e6e2-scripts\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.366992 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/056e639a-0805-4bb7-b0bd-620d9c67e6e2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.367074 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056e639a-0805-4bb7-b0bd-620d9c67e6e2-config\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.367996 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/056e639a-0805-4bb7-b0bd-620d9c67e6e2-config\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.368959 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/056e639a-0805-4bb7-b0bd-620d9c67e6e2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.369689 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/056e639a-0805-4bb7-b0bd-620d9c67e6e2-scripts\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.372068 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/056e639a-0805-4bb7-b0bd-620d9c67e6e2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.372331 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/056e639a-0805-4bb7-b0bd-620d9c67e6e2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.374750 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/056e639a-0805-4bb7-b0bd-620d9c67e6e2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.387405 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj477\" (UniqueName: \"kubernetes.io/projected/056e639a-0805-4bb7-b0bd-620d9c67e6e2-kube-api-access-hj477\") pod \"ovn-northd-0\" (UID: \"056e639a-0805-4bb7-b0bd-620d9c67e6e2\") " pod="openstack/ovn-northd-0"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.549180 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.955323 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e39270f2-0125-43f1-a2b3-cda4813614dd","Type":"ContainerStarted","Data":"0286ede997b7695538dfeed071898e1e86cab2be007088c478ce52669aef1735"}
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.958865 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e07db76c-5ab3-430d-b9ad-eba96f02ab9e","Type":"ContainerStarted","Data":"cffbd64c3ff3004c7ad067e5a837278a8d7674871fc6d3d9d098323e8ab8da52"}
Feb 18 19:33:37 crc kubenswrapper[4942]: I0218 19:33:37.974155 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=22.281420347 podStartE2EDuration="30.974138424s" podCreationTimestamp="2026-02-18 19:33:07 +0000 UTC" firstStartedPulling="2026-02-18 19:33:21.108008069 +0000 UTC m=+960.812940744" lastFinishedPulling="2026-02-18 19:33:29.800726166 +0000 UTC m=+969.505658821" observedRunningTime="2026-02-18 19:33:37.974116064 +0000 UTC m=+977.679048729" watchObservedRunningTime="2026-02-18 19:33:37.974138424 +0000 UTC m=+977.679071089"
Feb 18 19:33:38 crc kubenswrapper[4942]: I0218 19:33:38.009037 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 18 19:33:38 crc kubenswrapper[4942]: I0218 19:33:38.009200 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=22.599708368 podStartE2EDuration="30.009191353s" podCreationTimestamp="2026-02-18 19:33:08 +0000 UTC" firstStartedPulling="2026-02-18 19:33:21.16299116 +0000 UTC m=+960.867923835" lastFinishedPulling="2026-02-18 19:33:28.572474155 +0000 UTC m=+968.277406820" observedRunningTime="2026-02-18 19:33:38.003936418 +0000 UTC m=+977.708869103" watchObservedRunningTime="2026-02-18 19:33:38.009191353 +0000 UTC m=+977.714124018"
Feb 18 19:33:38 crc kubenswrapper[4942]: W0218 19:33:38.011942 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod056e639a_0805_4bb7_b0bd_620d9c67e6e2.slice/crio-39a341ee1f5374f3198fbc5e02b8a61178c88f6523e86a4fc2e4569d2e94a39b WatchSource:0}: Error finding container 39a341ee1f5374f3198fbc5e02b8a61178c88f6523e86a4fc2e4569d2e94a39b: Status 404 returned error can't find the container with id 39a341ee1f5374f3198fbc5e02b8a61178c88f6523e86a4fc2e4569d2e94a39b
Feb 18 19:33:38 crc kubenswrapper[4942]: I0218 19:33:38.500731 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 18 19:33:38 crc kubenswrapper[4942]: I0218 19:33:38.500919 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 18 19:33:38 crc kubenswrapper[4942]: I0218 19:33:38.970034 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"056e639a-0805-4bb7-b0bd-620d9c67e6e2","Type":"ContainerStarted","Data":"39a341ee1f5374f3198fbc5e02b8a61178c88f6523e86a4fc2e4569d2e94a39b"}
Feb 18 19:33:39 crc kubenswrapper[4942]: I0218 19:33:39.277632 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl"
Feb 18 19:33:39 crc kubenswrapper[4942]: I0218 19:33:39.380459 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-6896v"
Feb 18 19:33:39 crc kubenswrapper[4942]: I0218 19:33:39.455281 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-nblkl"]
Feb 18 19:33:39 crc kubenswrapper[4942]: I0218 19:33:39.978947 4942 generic.go:334] "Generic (PLEG): container finished" podID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerID="81a3193c7a82e4ed4f2a5322d29f8d82024b97bad905eacfd10f035fcf65ddf4" exitCode=0
Feb 18 19:33:39 crc kubenswrapper[4942]: I0218 19:33:39.979342 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" podUID="782cbd43-a7c9-45f4-99e3-44fe770be6a5" containerName="dnsmasq-dns" containerID="cri-o://9a09400e944780331e251a7d55ef689b64cf9b9306241c5f789fb2fd71f6617c" gracePeriod=10
Feb 18 19:33:39 crc kubenswrapper[4942]: I0218 19:33:39.979404 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"543db3d4-08d8-473f-a6ad-7e6a5bb9734c","Type":"ContainerDied","Data":"81a3193c7a82e4ed4f2a5322d29f8d82024b97bad905eacfd10f035fcf65ddf4"}
Feb 18 19:33:40 crc kubenswrapper[4942]: I0218 19:33:40.181111 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 18 19:33:40 crc kubenswrapper[4942]: I0218 19:33:40.182143 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 18 19:33:40 crc kubenswrapper[4942]: I0218 19:33:40.194974 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Feb 18 19:33:40 crc kubenswrapper[4942]: I0218 19:33:40.990749 4942 generic.go:334] "Generic (PLEG): container finished" podID="782cbd43-a7c9-45f4-99e3-44fe770be6a5" containerID="9a09400e944780331e251a7d55ef689b64cf9b9306241c5f789fb2fd71f6617c" exitCode=0
Feb 18 19:33:40 crc kubenswrapper[4942]: I0218 19:33:40.990847 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" event={"ID":"782cbd43-a7c9-45f4-99e3-44fe770be6a5","Type":"ContainerDied","Data":"9a09400e944780331e251a7d55ef689b64cf9b9306241c5f789fb2fd71f6617c"}
Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.525682 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-nnzck"]
Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.529517 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nnzck"
Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.558008 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.569448 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8vt6\" (UniqueName: \"kubernetes.io/projected/1e919317-cae2-432d-959f-8cf1d4520b56-kube-api-access-q8vt6\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck"
Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.569543 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-config\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck"
Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.569567 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck"
Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.569586 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck"
Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.569634 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-dns-svc\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck"
Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.579774 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nnzck"]
Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.675734 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-dns-svc\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck"
Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.676191 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8vt6\" (UniqueName: \"kubernetes.io/projected/1e919317-cae2-432d-959f-8cf1d4520b56-kube-api-access-q8vt6\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck"
Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.676249 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-config\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck"
Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.676270 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck"
Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.676295 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck"
Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.676706 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-dns-svc\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck"
Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.677020 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck"
Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.677314 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-config\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck"
Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.678031 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck"
Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.733633 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8vt6\" (UniqueName: \"kubernetes.io/projected/1e919317-cae2-432d-959f-8cf1d4520b56-kube-api-access-q8vt6\") pod \"dnsmasq-dns-698758b865-nnzck\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " pod="openstack/dnsmasq-dns-698758b865-nnzck"
Feb 18 19:33:42 crc kubenswrapper[4942]: I0218 19:33:42.855494 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nnzck"
Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.670718 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.683033 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.689073 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-b2nqs"
Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.689276 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.690400 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.690566 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.711857 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.795677 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125bdbb5-76a8-450f-b645-2133024a1bd0-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0"
Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.795734 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0"
Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.795826 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0"
Feb 18
19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.795885 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/125bdbb5-76a8-450f-b645-2133024a1bd0-cache\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.795903 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/125bdbb5-76a8-450f-b645-2133024a1bd0-lock\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.795957 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f47m7\" (UniqueName: \"kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-kube-api-access-f47m7\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.897873 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f47m7\" (UniqueName: \"kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-kube-api-access-f47m7\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.897962 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125bdbb5-76a8-450f-b645-2133024a1bd0-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.897985 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.898028 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.898066 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/125bdbb5-76a8-450f-b645-2133024a1bd0-cache\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.898086 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/125bdbb5-76a8-450f-b645-2133024a1bd0-lock\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: E0218 19:33:43.898274 4942 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:33:43 crc kubenswrapper[4942]: E0218 19:33:43.898300 4942 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:33:43 crc kubenswrapper[4942]: E0218 19:33:43.898376 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift podName:125bdbb5-76a8-450f-b645-2133024a1bd0 nodeName:}" failed. 
No retries permitted until 2026-02-18 19:33:44.398357545 +0000 UTC m=+984.103290210 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift") pod "swift-storage-0" (UID: "125bdbb5-76a8-450f-b645-2133024a1bd0") : configmap "swift-ring-files" not found Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.898668 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/125bdbb5-76a8-450f-b645-2133024a1bd0-lock\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.898934 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.898941 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/125bdbb5-76a8-450f-b645-2133024a1bd0-cache\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.905522 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125bdbb5-76a8-450f-b645-2133024a1bd0-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.918643 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f47m7\" (UniqueName: 
\"kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-kube-api-access-f47m7\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:43 crc kubenswrapper[4942]: I0218 19:33:43.934958 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.119066 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nnzck"] Feb 18 19:33:44 crc kubenswrapper[4942]: W0218 19:33:44.119504 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e919317_cae2_432d_959f_8cf1d4520b56.slice/crio-78b20f729f326e0f7c3c648fac44018c3d34b24ab3d2f709a7f976353f04998c WatchSource:0}: Error finding container 78b20f729f326e0f7c3c648fac44018c3d34b24ab3d2f709a7f976353f04998c: Status 404 returned error can't find the container with id 78b20f729f326e0f7c3c648fac44018c3d34b24ab3d2f709a7f976353f04998c Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.216794 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-cwjhb"] Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.221116 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.224993 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.226230 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.226280 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.230291 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-cwjhb"] Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.275704 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" podUID="782cbd43-a7c9-45f4-99e3-44fe770be6a5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.117:5353: connect: connection refused" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.305387 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wqmf\" (UniqueName: \"kubernetes.io/projected/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-kube-api-access-8wqmf\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.305526 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-dispersionconf\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.305553 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-swiftconf\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.305627 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-etc-swift\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.305660 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-combined-ca-bundle\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.305721 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-ring-data-devices\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.305746 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-scripts\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.407215 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-dispersionconf\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.407652 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-swiftconf\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.407747 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-etc-swift\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.407874 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-combined-ca-bundle\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.407976 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.408053 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-etc-swift\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.408061 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-ring-data-devices\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: E0218 19:33:44.408110 4942 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:33:44 crc kubenswrapper[4942]: E0218 19:33:44.408135 4942 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.408146 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-scripts\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: E0218 19:33:44.408189 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift podName:125bdbb5-76a8-450f-b645-2133024a1bd0 nodeName:}" failed. No retries permitted until 2026-02-18 19:33:45.408171571 +0000 UTC m=+985.113104246 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift") pod "swift-storage-0" (UID: "125bdbb5-76a8-450f-b645-2133024a1bd0") : configmap "swift-ring-files" not found Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.408254 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wqmf\" (UniqueName: \"kubernetes.io/projected/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-kube-api-access-8wqmf\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.408756 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-scripts\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.408954 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-ring-data-devices\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.410652 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-swiftconf\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.411896 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-combined-ca-bundle\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.412268 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-dispersionconf\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.427877 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wqmf\" (UniqueName: \"kubernetes.io/projected/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-kube-api-access-8wqmf\") pod \"swift-ring-rebalance-cwjhb\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") " pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:44 crc kubenswrapper[4942]: I0218 19:33:44.596255 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.021089 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"056e639a-0805-4bb7-b0bd-620d9c67e6e2","Type":"ContainerStarted","Data":"9f17c3fd7994cefbe90968aeaa9c74ad52d177060d2b0fb715dfe39a46d6af5f"} Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.022625 4942 generic.go:334] "Generic (PLEG): container finished" podID="1e919317-cae2-432d-959f-8cf1d4520b56" containerID="2d800ad31d40bf814e416ec398183ae11509cddedf514a96b60bf309617fbbde" exitCode=0 Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.022665 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nnzck" event={"ID":"1e919317-cae2-432d-959f-8cf1d4520b56","Type":"ContainerDied","Data":"2d800ad31d40bf814e416ec398183ae11509cddedf514a96b60bf309617fbbde"} Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.022680 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nnzck" event={"ID":"1e919317-cae2-432d-959f-8cf1d4520b56","Type":"ContainerStarted","Data":"78b20f729f326e0f7c3c648fac44018c3d34b24ab3d2f709a7f976353f04998c"} Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.135986 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-cwjhb"] Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.241414 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.333912 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-config\") pod \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.333959 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gltq\" (UniqueName: \"kubernetes.io/projected/782cbd43-a7c9-45f4-99e3-44fe770be6a5-kube-api-access-5gltq\") pod \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.334113 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-ovsdbserver-nb\") pod \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.334197 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-dns-svc\") pod \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\" (UID: \"782cbd43-a7c9-45f4-99e3-44fe770be6a5\") " Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.342943 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/782cbd43-a7c9-45f4-99e3-44fe770be6a5-kube-api-access-5gltq" (OuterVolumeSpecName: "kube-api-access-5gltq") pod "782cbd43-a7c9-45f4-99e3-44fe770be6a5" (UID: "782cbd43-a7c9-45f4-99e3-44fe770be6a5"). InnerVolumeSpecName "kube-api-access-5gltq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.371529 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "782cbd43-a7c9-45f4-99e3-44fe770be6a5" (UID: "782cbd43-a7c9-45f4-99e3-44fe770be6a5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.380715 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "782cbd43-a7c9-45f4-99e3-44fe770be6a5" (UID: "782cbd43-a7c9-45f4-99e3-44fe770be6a5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.387917 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-config" (OuterVolumeSpecName: "config") pod "782cbd43-a7c9-45f4-99e3-44fe770be6a5" (UID: "782cbd43-a7c9-45f4-99e3-44fe770be6a5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.436489 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.436562 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.436576 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gltq\" (UniqueName: \"kubernetes.io/projected/782cbd43-a7c9-45f4-99e3-44fe770be6a5-kube-api-access-5gltq\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.436586 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:45 crc kubenswrapper[4942]: I0218 19:33:45.436595 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/782cbd43-a7c9-45f4-99e3-44fe770be6a5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:45 crc kubenswrapper[4942]: E0218 19:33:45.436733 4942 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:33:45 crc kubenswrapper[4942]: E0218 19:33:45.436774 4942 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:33:45 crc kubenswrapper[4942]: E0218 19:33:45.436846 4942 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift podName:125bdbb5-76a8-450f-b645-2133024a1bd0 nodeName:}" failed. No retries permitted until 2026-02-18 19:33:47.436823304 +0000 UTC m=+987.141756019 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift") pod "swift-storage-0" (UID: "125bdbb5-76a8-450f-b645-2133024a1bd0") : configmap "swift-ring-files" not found Feb 18 19:33:45 crc kubenswrapper[4942]: E0218 19:33:45.816485 4942 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.188:39186->38.102.83.188:38981: write tcp 38.102.83.188:39186->38.102.83.188:38981: write: broken pipe Feb 18 19:33:46 crc kubenswrapper[4942]: I0218 19:33:46.033052 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cwjhb" event={"ID":"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a","Type":"ContainerStarted","Data":"98b157f8537f821e0f49062fdd12779fd66abc1af86316a5e1b821365807dd5d"} Feb 18 19:33:46 crc kubenswrapper[4942]: I0218 19:33:46.035180 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nnzck" event={"ID":"1e919317-cae2-432d-959f-8cf1d4520b56","Type":"ContainerStarted","Data":"c929bc7a17036437784be59c9727e4ee675c038074de07e36b3deb35090e3ae7"} Feb 18 19:33:46 crc kubenswrapper[4942]: I0218 19:33:46.035281 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-nnzck" Feb 18 19:33:46 crc kubenswrapper[4942]: I0218 19:33:46.037418 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"056e639a-0805-4bb7-b0bd-620d9c67e6e2","Type":"ContainerStarted","Data":"c9513720d4cb93d7288ea798ccc25fec83217b1fdfa20e14c3870e1e4c7ac099"} Feb 18 19:33:46 crc kubenswrapper[4942]: I0218 19:33:46.037995 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/ovn-northd-0" Feb 18 19:33:46 crc kubenswrapper[4942]: I0218 19:33:46.039890 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" event={"ID":"782cbd43-a7c9-45f4-99e3-44fe770be6a5","Type":"ContainerDied","Data":"81a7746b89eb6f40a6863d6c1c0673a32e9e2c5723e21fb76df57dee6d01c96b"} Feb 18 19:33:46 crc kubenswrapper[4942]: I0218 19:33:46.039935 4942 scope.go:117] "RemoveContainer" containerID="9a09400e944780331e251a7d55ef689b64cf9b9306241c5f789fb2fd71f6617c" Feb 18 19:33:46 crc kubenswrapper[4942]: I0218 19:33:46.040000 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-nblkl" Feb 18 19:33:46 crc kubenswrapper[4942]: I0218 19:33:46.053097 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-nnzck" podStartSLOduration=4.053074479 podStartE2EDuration="4.053074479s" podCreationTimestamp="2026-02-18 19:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:33:46.051612132 +0000 UTC m=+985.756544817" watchObservedRunningTime="2026-02-18 19:33:46.053074479 +0000 UTC m=+985.758007144" Feb 18 19:33:46 crc kubenswrapper[4942]: I0218 19:33:46.069933 4942 scope.go:117] "RemoveContainer" containerID="7e686888a5f0752dbf5d7b1d5a9c7b87451890452b5fc3cae41ff40186646673" Feb 18 19:33:46 crc kubenswrapper[4942]: I0218 19:33:46.073512 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=7.326389623 podStartE2EDuration="9.073499403s" podCreationTimestamp="2026-02-18 19:33:37 +0000 UTC" firstStartedPulling="2026-02-18 19:33:38.014088679 +0000 UTC m=+977.719021344" lastFinishedPulling="2026-02-18 19:33:39.761198459 +0000 UTC m=+979.466131124" observedRunningTime="2026-02-18 19:33:46.071684677 +0000 UTC m=+985.776617352" 
watchObservedRunningTime="2026-02-18 19:33:46.073499403 +0000 UTC m=+985.778432068" Feb 18 19:33:46 crc kubenswrapper[4942]: I0218 19:33:46.109068 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-nblkl"] Feb 18 19:33:46 crc kubenswrapper[4942]: I0218 19:33:46.110852 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-nblkl"] Feb 18 19:33:47 crc kubenswrapper[4942]: I0218 19:33:47.059605 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="782cbd43-a7c9-45f4-99e3-44fe770be6a5" path="/var/lib/kubelet/pods/782cbd43-a7c9-45f4-99e3-44fe770be6a5/volumes" Feb 18 19:33:47 crc kubenswrapper[4942]: I0218 19:33:47.478942 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:47 crc kubenswrapper[4942]: E0218 19:33:47.479189 4942 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:33:47 crc kubenswrapper[4942]: E0218 19:33:47.479369 4942 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:33:47 crc kubenswrapper[4942]: E0218 19:33:47.479442 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift podName:125bdbb5-76a8-450f-b645-2133024a1bd0 nodeName:}" failed. No retries permitted until 2026-02-18 19:33:51.479416121 +0000 UTC m=+991.184348796 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift") pod "swift-storage-0" (UID: "125bdbb5-76a8-450f-b645-2133024a1bd0") : configmap "swift-ring-files" not found Feb 18 19:33:48 crc kubenswrapper[4942]: I0218 19:33:48.299052 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:48 crc kubenswrapper[4942]: I0218 19:33:48.378773 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 18 19:33:48 crc kubenswrapper[4942]: I0218 19:33:48.854366 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-4vztq"] Feb 18 19:33:48 crc kubenswrapper[4942]: E0218 19:33:48.854700 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782cbd43-a7c9-45f4-99e3-44fe770be6a5" containerName="dnsmasq-dns" Feb 18 19:33:48 crc kubenswrapper[4942]: I0218 19:33:48.854716 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="782cbd43-a7c9-45f4-99e3-44fe770be6a5" containerName="dnsmasq-dns" Feb 18 19:33:48 crc kubenswrapper[4942]: E0218 19:33:48.854746 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782cbd43-a7c9-45f4-99e3-44fe770be6a5" containerName="init" Feb 18 19:33:48 crc kubenswrapper[4942]: I0218 19:33:48.854752 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="782cbd43-a7c9-45f4-99e3-44fe770be6a5" containerName="init" Feb 18 19:33:48 crc kubenswrapper[4942]: I0218 19:33:48.854920 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="782cbd43-a7c9-45f4-99e3-44fe770be6a5" containerName="dnsmasq-dns" Feb 18 19:33:48 crc kubenswrapper[4942]: I0218 19:33:48.855484 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4vztq" Feb 18 19:33:48 crc kubenswrapper[4942]: I0218 19:33:48.859078 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 18 19:33:48 crc kubenswrapper[4942]: I0218 19:33:48.878289 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4vztq"] Feb 18 19:33:48 crc kubenswrapper[4942]: I0218 19:33:48.916311 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7rl9\" (UniqueName: \"kubernetes.io/projected/7ae58df9-2a9f-4592-a806-b6f5efd71155-kube-api-access-t7rl9\") pod \"root-account-create-update-4vztq\" (UID: \"7ae58df9-2a9f-4592-a806-b6f5efd71155\") " pod="openstack/root-account-create-update-4vztq" Feb 18 19:33:48 crc kubenswrapper[4942]: I0218 19:33:48.916367 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae58df9-2a9f-4592-a806-b6f5efd71155-operator-scripts\") pod \"root-account-create-update-4vztq\" (UID: \"7ae58df9-2a9f-4592-a806-b6f5efd71155\") " pod="openstack/root-account-create-update-4vztq" Feb 18 19:33:49 crc kubenswrapper[4942]: I0218 19:33:49.018605 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7rl9\" (UniqueName: \"kubernetes.io/projected/7ae58df9-2a9f-4592-a806-b6f5efd71155-kube-api-access-t7rl9\") pod \"root-account-create-update-4vztq\" (UID: \"7ae58df9-2a9f-4592-a806-b6f5efd71155\") " pod="openstack/root-account-create-update-4vztq" Feb 18 19:33:49 crc kubenswrapper[4942]: I0218 19:33:49.019628 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae58df9-2a9f-4592-a806-b6f5efd71155-operator-scripts\") pod \"root-account-create-update-4vztq\" (UID: 
\"7ae58df9-2a9f-4592-a806-b6f5efd71155\") " pod="openstack/root-account-create-update-4vztq" Feb 18 19:33:49 crc kubenswrapper[4942]: I0218 19:33:49.020617 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae58df9-2a9f-4592-a806-b6f5efd71155-operator-scripts\") pod \"root-account-create-update-4vztq\" (UID: \"7ae58df9-2a9f-4592-a806-b6f5efd71155\") " pod="openstack/root-account-create-update-4vztq" Feb 18 19:33:49 crc kubenswrapper[4942]: I0218 19:33:49.037322 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7rl9\" (UniqueName: \"kubernetes.io/projected/7ae58df9-2a9f-4592-a806-b6f5efd71155-kube-api-access-t7rl9\") pod \"root-account-create-update-4vztq\" (UID: \"7ae58df9-2a9f-4592-a806-b6f5efd71155\") " pod="openstack/root-account-create-update-4vztq" Feb 18 19:33:49 crc kubenswrapper[4942]: I0218 19:33:49.173048 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4vztq" Feb 18 19:33:50 crc kubenswrapper[4942]: I0218 19:33:50.639821 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 18 19:33:50 crc kubenswrapper[4942]: I0218 19:33:50.717699 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.292781 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-h49cz"] Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.293890 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-h49cz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.302739 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-h49cz"] Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.363609 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76ck2\" (UniqueName: \"kubernetes.io/projected/a3564c8a-5e18-4c53-b225-7e9baf41a371-kube-api-access-76ck2\") pod \"keystone-db-create-h49cz\" (UID: \"a3564c8a-5e18-4c53-b225-7e9baf41a371\") " pod="openstack/keystone-db-create-h49cz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.363691 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3564c8a-5e18-4c53-b225-7e9baf41a371-operator-scripts\") pod \"keystone-db-create-h49cz\" (UID: \"a3564c8a-5e18-4c53-b225-7e9baf41a371\") " pod="openstack/keystone-db-create-h49cz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.414441 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d9d4-account-create-update-7gsvf"] Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.415473 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d9d4-account-create-update-7gsvf" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.419325 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.424554 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d9d4-account-create-update-7gsvf"] Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.465723 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcd24\" (UniqueName: \"kubernetes.io/projected/646ba630-1210-431d-8902-b5c0968b35bb-kube-api-access-rcd24\") pod \"keystone-d9d4-account-create-update-7gsvf\" (UID: \"646ba630-1210-431d-8902-b5c0968b35bb\") " pod="openstack/keystone-d9d4-account-create-update-7gsvf" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.465810 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76ck2\" (UniqueName: \"kubernetes.io/projected/a3564c8a-5e18-4c53-b225-7e9baf41a371-kube-api-access-76ck2\") pod \"keystone-db-create-h49cz\" (UID: \"a3564c8a-5e18-4c53-b225-7e9baf41a371\") " pod="openstack/keystone-db-create-h49cz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.465878 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/646ba630-1210-431d-8902-b5c0968b35bb-operator-scripts\") pod \"keystone-d9d4-account-create-update-7gsvf\" (UID: \"646ba630-1210-431d-8902-b5c0968b35bb\") " pod="openstack/keystone-d9d4-account-create-update-7gsvf" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.465924 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3564c8a-5e18-4c53-b225-7e9baf41a371-operator-scripts\") pod \"keystone-db-create-h49cz\" 
(UID: \"a3564c8a-5e18-4c53-b225-7e9baf41a371\") " pod="openstack/keystone-db-create-h49cz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.466674 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3564c8a-5e18-4c53-b225-7e9baf41a371-operator-scripts\") pod \"keystone-db-create-h49cz\" (UID: \"a3564c8a-5e18-4c53-b225-7e9baf41a371\") " pod="openstack/keystone-db-create-h49cz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.499344 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76ck2\" (UniqueName: \"kubernetes.io/projected/a3564c8a-5e18-4c53-b225-7e9baf41a371-kube-api-access-76ck2\") pod \"keystone-db-create-h49cz\" (UID: \"a3564c8a-5e18-4c53-b225-7e9baf41a371\") " pod="openstack/keystone-db-create-h49cz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.505630 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-9xsbj"] Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.506794 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-9xsbj" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.517451 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9xsbj"] Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.567015 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6821c713-6163-44f5-a749-415f0c1d8337-operator-scripts\") pod \"placement-db-create-9xsbj\" (UID: \"6821c713-6163-44f5-a749-415f0c1d8337\") " pod="openstack/placement-db-create-9xsbj" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.567338 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf9x9\" (UniqueName: \"kubernetes.io/projected/6821c713-6163-44f5-a749-415f0c1d8337-kube-api-access-rf9x9\") pod \"placement-db-create-9xsbj\" (UID: \"6821c713-6163-44f5-a749-415f0c1d8337\") " pod="openstack/placement-db-create-9xsbj" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.567375 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcd24\" (UniqueName: \"kubernetes.io/projected/646ba630-1210-431d-8902-b5c0968b35bb-kube-api-access-rcd24\") pod \"keystone-d9d4-account-create-update-7gsvf\" (UID: \"646ba630-1210-431d-8902-b5c0968b35bb\") " pod="openstack/keystone-d9d4-account-create-update-7gsvf" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.567418 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.567454 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/646ba630-1210-431d-8902-b5c0968b35bb-operator-scripts\") pod \"keystone-d9d4-account-create-update-7gsvf\" (UID: \"646ba630-1210-431d-8902-b5c0968b35bb\") " pod="openstack/keystone-d9d4-account-create-update-7gsvf" Feb 18 19:33:51 crc kubenswrapper[4942]: E0218 19:33:51.567577 4942 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 19:33:51 crc kubenswrapper[4942]: E0218 19:33:51.567596 4942 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 19:33:51 crc kubenswrapper[4942]: E0218 19:33:51.567641 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift podName:125bdbb5-76a8-450f-b645-2133024a1bd0 nodeName:}" failed. No retries permitted until 2026-02-18 19:33:59.567624894 +0000 UTC m=+999.272557559 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift") pod "swift-storage-0" (UID: "125bdbb5-76a8-450f-b645-2133024a1bd0") : configmap "swift-ring-files" not found Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.568133 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/646ba630-1210-431d-8902-b5c0968b35bb-operator-scripts\") pod \"keystone-d9d4-account-create-update-7gsvf\" (UID: \"646ba630-1210-431d-8902-b5c0968b35bb\") " pod="openstack/keystone-d9d4-account-create-update-7gsvf" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.598534 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcd24\" (UniqueName: \"kubernetes.io/projected/646ba630-1210-431d-8902-b5c0968b35bb-kube-api-access-rcd24\") pod \"keystone-d9d4-account-create-update-7gsvf\" (UID: \"646ba630-1210-431d-8902-b5c0968b35bb\") " pod="openstack/keystone-d9d4-account-create-update-7gsvf" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.613508 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-h49cz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.620790 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-ce28-account-create-update-h5jjz"] Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.623864 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ce28-account-create-update-h5jjz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.626628 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.629543 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ce28-account-create-update-h5jjz"] Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.668858 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6821c713-6163-44f5-a749-415f0c1d8337-operator-scripts\") pod \"placement-db-create-9xsbj\" (UID: \"6821c713-6163-44f5-a749-415f0c1d8337\") " pod="openstack/placement-db-create-9xsbj" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.668900 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf9x9\" (UniqueName: \"kubernetes.io/projected/6821c713-6163-44f5-a749-415f0c1d8337-kube-api-access-rf9x9\") pod \"placement-db-create-9xsbj\" (UID: \"6821c713-6163-44f5-a749-415f0c1d8337\") " pod="openstack/placement-db-create-9xsbj" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.668937 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvxgs\" (UniqueName: \"kubernetes.io/projected/371430b6-c9b6-48ba-a1a7-d1ce72a001ec-kube-api-access-mvxgs\") pod \"placement-ce28-account-create-update-h5jjz\" (UID: \"371430b6-c9b6-48ba-a1a7-d1ce72a001ec\") " pod="openstack/placement-ce28-account-create-update-h5jjz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.669016 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/371430b6-c9b6-48ba-a1a7-d1ce72a001ec-operator-scripts\") pod \"placement-ce28-account-create-update-h5jjz\" (UID: 
\"371430b6-c9b6-48ba-a1a7-d1ce72a001ec\") " pod="openstack/placement-ce28-account-create-update-h5jjz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.669746 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6821c713-6163-44f5-a749-415f0c1d8337-operator-scripts\") pod \"placement-db-create-9xsbj\" (UID: \"6821c713-6163-44f5-a749-415f0c1d8337\") " pod="openstack/placement-db-create-9xsbj" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.707463 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf9x9\" (UniqueName: \"kubernetes.io/projected/6821c713-6163-44f5-a749-415f0c1d8337-kube-api-access-rf9x9\") pod \"placement-db-create-9xsbj\" (UID: \"6821c713-6163-44f5-a749-415f0c1d8337\") " pod="openstack/placement-db-create-9xsbj" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.739329 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d9d4-account-create-update-7gsvf" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.770255 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvxgs\" (UniqueName: \"kubernetes.io/projected/371430b6-c9b6-48ba-a1a7-d1ce72a001ec-kube-api-access-mvxgs\") pod \"placement-ce28-account-create-update-h5jjz\" (UID: \"371430b6-c9b6-48ba-a1a7-d1ce72a001ec\") " pod="openstack/placement-ce28-account-create-update-h5jjz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.770346 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/371430b6-c9b6-48ba-a1a7-d1ce72a001ec-operator-scripts\") pod \"placement-ce28-account-create-update-h5jjz\" (UID: \"371430b6-c9b6-48ba-a1a7-d1ce72a001ec\") " pod="openstack/placement-ce28-account-create-update-h5jjz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.770984 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/371430b6-c9b6-48ba-a1a7-d1ce72a001ec-operator-scripts\") pod \"placement-ce28-account-create-update-h5jjz\" (UID: \"371430b6-c9b6-48ba-a1a7-d1ce72a001ec\") " pod="openstack/placement-ce28-account-create-update-h5jjz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.795335 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvxgs\" (UniqueName: \"kubernetes.io/projected/371430b6-c9b6-48ba-a1a7-d1ce72a001ec-kube-api-access-mvxgs\") pod \"placement-ce28-account-create-update-h5jjz\" (UID: \"371430b6-c9b6-48ba-a1a7-d1ce72a001ec\") " pod="openstack/placement-ce28-account-create-update-h5jjz" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.849121 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9xsbj" Feb 18 19:33:51 crc kubenswrapper[4942]: I0218 19:33:51.945049 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ce28-account-create-update-h5jjz" Feb 18 19:33:52 crc kubenswrapper[4942]: W0218 19:33:52.331795 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod646ba630_1210_431d_8902_b5c0968b35bb.slice/crio-c2fe7ba176d2c472d430fa4f250787ec1a4f81f13679ca2be4b479e5f0b8e9f6 WatchSource:0}: Error finding container c2fe7ba176d2c472d430fa4f250787ec1a4f81f13679ca2be4b479e5f0b8e9f6: Status 404 returned error can't find the container with id c2fe7ba176d2c472d430fa4f250787ec1a4f81f13679ca2be4b479e5f0b8e9f6 Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.335057 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d9d4-account-create-update-7gsvf"] Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.457802 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-h49cz"] Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.471488 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4vztq"] Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.542796 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9xsbj"] Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.571862 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-59tjm"] Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.573385 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-59tjm" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.578365 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-59tjm"] Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.643297 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-9457-account-create-update-5hrw4"] Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.644870 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-9457-account-create-update-5hrw4" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.649194 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.659112 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-9457-account-create-update-5hrw4"] Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.672414 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ce28-account-create-update-h5jjz"] Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.695581 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jh74\" (UniqueName: \"kubernetes.io/projected/ba056ec7-86a5-43b6-aebd-a22b21843cc3-kube-api-access-7jh74\") pod \"watcher-9457-account-create-update-5hrw4\" (UID: \"ba056ec7-86a5-43b6-aebd-a22b21843cc3\") " pod="openstack/watcher-9457-account-create-update-5hrw4" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.695624 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc7sh\" (UniqueName: \"kubernetes.io/projected/2f4f7b72-968a-4aed-b6e9-87f43677f342-kube-api-access-cc7sh\") pod \"watcher-db-create-59tjm\" (UID: \"2f4f7b72-968a-4aed-b6e9-87f43677f342\") " pod="openstack/watcher-db-create-59tjm" Feb 18 19:33:52 crc 
kubenswrapper[4942]: I0218 19:33:52.695685 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f4f7b72-968a-4aed-b6e9-87f43677f342-operator-scripts\") pod \"watcher-db-create-59tjm\" (UID: \"2f4f7b72-968a-4aed-b6e9-87f43677f342\") " pod="openstack/watcher-db-create-59tjm" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.695728 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba056ec7-86a5-43b6-aebd-a22b21843cc3-operator-scripts\") pod \"watcher-9457-account-create-update-5hrw4\" (UID: \"ba056ec7-86a5-43b6-aebd-a22b21843cc3\") " pod="openstack/watcher-9457-account-create-update-5hrw4" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.796996 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba056ec7-86a5-43b6-aebd-a22b21843cc3-operator-scripts\") pod \"watcher-9457-account-create-update-5hrw4\" (UID: \"ba056ec7-86a5-43b6-aebd-a22b21843cc3\") " pod="openstack/watcher-9457-account-create-update-5hrw4" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.797137 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jh74\" (UniqueName: \"kubernetes.io/projected/ba056ec7-86a5-43b6-aebd-a22b21843cc3-kube-api-access-7jh74\") pod \"watcher-9457-account-create-update-5hrw4\" (UID: \"ba056ec7-86a5-43b6-aebd-a22b21843cc3\") " pod="openstack/watcher-9457-account-create-update-5hrw4" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.797173 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc7sh\" (UniqueName: \"kubernetes.io/projected/2f4f7b72-968a-4aed-b6e9-87f43677f342-kube-api-access-cc7sh\") pod \"watcher-db-create-59tjm\" (UID: 
\"2f4f7b72-968a-4aed-b6e9-87f43677f342\") " pod="openstack/watcher-db-create-59tjm" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.797253 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f4f7b72-968a-4aed-b6e9-87f43677f342-operator-scripts\") pod \"watcher-db-create-59tjm\" (UID: \"2f4f7b72-968a-4aed-b6e9-87f43677f342\") " pod="openstack/watcher-db-create-59tjm" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.798288 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f4f7b72-968a-4aed-b6e9-87f43677f342-operator-scripts\") pod \"watcher-db-create-59tjm\" (UID: \"2f4f7b72-968a-4aed-b6e9-87f43677f342\") " pod="openstack/watcher-db-create-59tjm" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.799240 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba056ec7-86a5-43b6-aebd-a22b21843cc3-operator-scripts\") pod \"watcher-9457-account-create-update-5hrw4\" (UID: \"ba056ec7-86a5-43b6-aebd-a22b21843cc3\") " pod="openstack/watcher-9457-account-create-update-5hrw4" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.819609 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc7sh\" (UniqueName: \"kubernetes.io/projected/2f4f7b72-968a-4aed-b6e9-87f43677f342-kube-api-access-cc7sh\") pod \"watcher-db-create-59tjm\" (UID: \"2f4f7b72-968a-4aed-b6e9-87f43677f342\") " pod="openstack/watcher-db-create-59tjm" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.819820 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jh74\" (UniqueName: \"kubernetes.io/projected/ba056ec7-86a5-43b6-aebd-a22b21843cc3-kube-api-access-7jh74\") pod \"watcher-9457-account-create-update-5hrw4\" (UID: \"ba056ec7-86a5-43b6-aebd-a22b21843cc3\") " 
pod="openstack/watcher-9457-account-create-update-5hrw4" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.836640 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-9457-account-create-update-5hrw4" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.856893 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-nnzck" Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.910351 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6896v"] Feb 18 19:33:52 crc kubenswrapper[4942]: I0218 19:33:52.910842 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-6896v" podUID="d783b8b1-2938-4635-8a04-df942aa84383" containerName="dnsmasq-dns" containerID="cri-o://d4f1c1c791b1dde07b4dcac7910f06502d1dd5c9b462e412bdca411f39c47164" gracePeriod=10 Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.110064 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-59tjm" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.128283 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4vztq" event={"ID":"7ae58df9-2a9f-4592-a806-b6f5efd71155","Type":"ContainerStarted","Data":"f5b0e5f07640ac134e229b85e5cd422e569347ef8859ca3988fc0a14ab76decb"} Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.146158 4942 generic.go:334] "Generic (PLEG): container finished" podID="646ba630-1210-431d-8902-b5c0968b35bb" containerID="7973de763d55a77ffbc3e3d1001daee7ca68a526d4309188caa67a4ce4135e55" exitCode=0 Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.147048 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d9d4-account-create-update-7gsvf" event={"ID":"646ba630-1210-431d-8902-b5c0968b35bb","Type":"ContainerDied","Data":"7973de763d55a77ffbc3e3d1001daee7ca68a526d4309188caa67a4ce4135e55"} Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.147127 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d9d4-account-create-update-7gsvf" event={"ID":"646ba630-1210-431d-8902-b5c0968b35bb","Type":"ContainerStarted","Data":"c2fe7ba176d2c472d430fa4f250787ec1a4f81f13679ca2be4b479e5f0b8e9f6"} Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.152478 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cwjhb" event={"ID":"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a","Type":"ContainerStarted","Data":"55829c9fbf3eef2bdd3e7606f5ad7942662f83792b2404329b3607ab1503d0ae"} Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.177482 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9xsbj" event={"ID":"6821c713-6163-44f5-a749-415f0c1d8337","Type":"ContainerStarted","Data":"1eb3204b9b0589d490ccc1c18591bfe59c0e4d3c2638fc8531a3fb7550c8d9bf"} Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.185589 4942 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-cwjhb" podStartSLOduration=2.466857222 podStartE2EDuration="9.18556742s" podCreationTimestamp="2026-02-18 19:33:44 +0000 UTC" firstStartedPulling="2026-02-18 19:33:45.158090635 +0000 UTC m=+984.863023290" lastFinishedPulling="2026-02-18 19:33:51.876800833 +0000 UTC m=+991.581733488" observedRunningTime="2026-02-18 19:33:53.178222462 +0000 UTC m=+992.883155127" watchObservedRunningTime="2026-02-18 19:33:53.18556742 +0000 UTC m=+992.890500085" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.196857 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"543db3d4-08d8-473f-a6ad-7e6a5bb9734c","Type":"ContainerStarted","Data":"19ca73d07d23c2f4be951d7909e61b79e21cfc7d91c0a9ffd938eb9ea1e5646a"} Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.202599 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-h49cz" event={"ID":"a3564c8a-5e18-4c53-b225-7e9baf41a371","Type":"ContainerStarted","Data":"66f2076a4d3224486921697544c5266b6a3f5f3fd789e15549a3d73d0240e056"} Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.204352 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ce28-account-create-update-h5jjz" event={"ID":"371430b6-c9b6-48ba-a1a7-d1ce72a001ec","Type":"ContainerStarted","Data":"4e49158c977b69109020d9375918418b28e7f6670849fc1495f27f4bb36f8420"} Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.204378 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ce28-account-create-update-h5jjz" event={"ID":"371430b6-c9b6-48ba-a1a7-d1ce72a001ec","Type":"ContainerStarted","Data":"eee16dd4bd5b8b487af0f0974bd123a4773635c28b49751dee93789a473f7b0b"} Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.227018 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/placement-ce28-account-create-update-h5jjz" podStartSLOduration=2.226998103 podStartE2EDuration="2.226998103s" podCreationTimestamp="2026-02-18 19:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:33:53.225077414 +0000 UTC m=+992.930010079" watchObservedRunningTime="2026-02-18 19:33:53.226998103 +0000 UTC m=+992.931930778" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.314974 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-9457-account-create-update-5hrw4"] Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.594884 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.717922 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-ovsdbserver-sb\") pod \"d783b8b1-2938-4635-8a04-df942aa84383\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.718325 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzjg8\" (UniqueName: \"kubernetes.io/projected/d783b8b1-2938-4635-8a04-df942aa84383-kube-api-access-xzjg8\") pod \"d783b8b1-2938-4635-8a04-df942aa84383\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.718353 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-ovsdbserver-nb\") pod \"d783b8b1-2938-4635-8a04-df942aa84383\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.718469 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-config\") pod \"d783b8b1-2938-4635-8a04-df942aa84383\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.718535 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-dns-svc\") pod \"d783b8b1-2938-4635-8a04-df942aa84383\" (UID: \"d783b8b1-2938-4635-8a04-df942aa84383\") " Feb 18 19:33:53 crc kubenswrapper[4942]: W0218 19:33:53.727728 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f4f7b72_968a_4aed_b6e9_87f43677f342.slice/crio-2b0a48b277512b0bf7d988c25ddbb0deb0bbee03c0e145e86446505884383033 WatchSource:0}: Error finding container 2b0a48b277512b0bf7d988c25ddbb0deb0bbee03c0e145e86446505884383033: Status 404 returned error can't find the container with id 2b0a48b277512b0bf7d988c25ddbb0deb0bbee03c0e145e86446505884383033 Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.729163 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-59tjm"] Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.740342 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.740452 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.741099 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d783b8b1-2938-4635-8a04-df942aa84383-kube-api-access-xzjg8" (OuterVolumeSpecName: "kube-api-access-xzjg8") pod "d783b8b1-2938-4635-8a04-df942aa84383" (UID: "d783b8b1-2938-4635-8a04-df942aa84383"). InnerVolumeSpecName "kube-api-access-xzjg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.763470 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d783b8b1-2938-4635-8a04-df942aa84383" (UID: "d783b8b1-2938-4635-8a04-df942aa84383"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.764628 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-config" (OuterVolumeSpecName: "config") pod "d783b8b1-2938-4635-8a04-df942aa84383" (UID: "d783b8b1-2938-4635-8a04-df942aa84383"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.767087 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d783b8b1-2938-4635-8a04-df942aa84383" (UID: "d783b8b1-2938-4635-8a04-df942aa84383"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.770228 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d783b8b1-2938-4635-8a04-df942aa84383" (UID: "d783b8b1-2938-4635-8a04-df942aa84383"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.820075 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.820108 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.820118 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.820168 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzjg8\" (UniqueName: \"kubernetes.io/projected/d783b8b1-2938-4635-8a04-df942aa84383-kube-api-access-xzjg8\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:53 crc kubenswrapper[4942]: I0218 19:33:53.820177 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d783b8b1-2938-4635-8a04-df942aa84383-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.212811 4942 generic.go:334] "Generic (PLEG): container finished" podID="6821c713-6163-44f5-a749-415f0c1d8337" 
containerID="761092c069dfd66382418fe07bf3c15f0aee53ccbdf6b11196e33385aae3fc8b" exitCode=0 Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.213162 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9xsbj" event={"ID":"6821c713-6163-44f5-a749-415f0c1d8337","Type":"ContainerDied","Data":"761092c069dfd66382418fe07bf3c15f0aee53ccbdf6b11196e33385aae3fc8b"} Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.214866 4942 generic.go:334] "Generic (PLEG): container finished" podID="a3564c8a-5e18-4c53-b225-7e9baf41a371" containerID="f3ac5111bbb6bd92f96a1d8bfbfe931ddce997416181ddc95500cf9c11a42867" exitCode=0 Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.214977 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-h49cz" event={"ID":"a3564c8a-5e18-4c53-b225-7e9baf41a371","Type":"ContainerDied","Data":"f3ac5111bbb6bd92f96a1d8bfbfe931ddce997416181ddc95500cf9c11a42867"} Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.216740 4942 generic.go:334] "Generic (PLEG): container finished" podID="371430b6-c9b6-48ba-a1a7-d1ce72a001ec" containerID="4e49158c977b69109020d9375918418b28e7f6670849fc1495f27f4bb36f8420" exitCode=0 Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.216891 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ce28-account-create-update-h5jjz" event={"ID":"371430b6-c9b6-48ba-a1a7-d1ce72a001ec","Type":"ContainerDied","Data":"4e49158c977b69109020d9375918418b28e7f6670849fc1495f27f4bb36f8420"} Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.219031 4942 generic.go:334] "Generic (PLEG): container finished" podID="7ae58df9-2a9f-4592-a806-b6f5efd71155" containerID="91775cfa347502e2c1757de451b7156448b5de2986ec185b6afdfe4b5a592293" exitCode=0 Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.219080 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4vztq" 
event={"ID":"7ae58df9-2a9f-4592-a806-b6f5efd71155","Type":"ContainerDied","Data":"91775cfa347502e2c1757de451b7156448b5de2986ec185b6afdfe4b5a592293"} Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.221711 4942 generic.go:334] "Generic (PLEG): container finished" podID="d783b8b1-2938-4635-8a04-df942aa84383" containerID="d4f1c1c791b1dde07b4dcac7910f06502d1dd5c9b462e412bdca411f39c47164" exitCode=0 Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.221930 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6896v" event={"ID":"d783b8b1-2938-4635-8a04-df942aa84383","Type":"ContainerDied","Data":"d4f1c1c791b1dde07b4dcac7910f06502d1dd5c9b462e412bdca411f39c47164"} Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.222044 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6896v" event={"ID":"d783b8b1-2938-4635-8a04-df942aa84383","Type":"ContainerDied","Data":"448c589fbd7559c4745406aafcb7a6277e2c8e57050b505f7abd3899347233bb"} Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.222135 4942 scope.go:117] "RemoveContainer" containerID="d4f1c1c791b1dde07b4dcac7910f06502d1dd5c9b462e412bdca411f39c47164" Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.222341 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-6896v" Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.230944 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-59tjm" event={"ID":"2f4f7b72-968a-4aed-b6e9-87f43677f342","Type":"ContainerStarted","Data":"2b0a48b277512b0bf7d988c25ddbb0deb0bbee03c0e145e86446505884383033"} Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.233571 4942 generic.go:334] "Generic (PLEG): container finished" podID="ba056ec7-86a5-43b6-aebd-a22b21843cc3" containerID="376d0fc77c68f0c59dee539c15e1e9f915935989d2e259a07dc205d03784efe9" exitCode=0 Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.234415 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-9457-account-create-update-5hrw4" event={"ID":"ba056ec7-86a5-43b6-aebd-a22b21843cc3","Type":"ContainerDied","Data":"376d0fc77c68f0c59dee539c15e1e9f915935989d2e259a07dc205d03784efe9"} Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.234447 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-9457-account-create-update-5hrw4" event={"ID":"ba056ec7-86a5-43b6-aebd-a22b21843cc3","Type":"ContainerStarted","Data":"38b3c170c47184369c1ec21f9724664ac8066d16e78c42b7d539b1e87174297e"} Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.284366 4942 scope.go:117] "RemoveContainer" containerID="b7b09518d61e90a2b8119c940a1f1623600d941819cf448300a00306d69169af" Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.327035 4942 scope.go:117] "RemoveContainer" containerID="d4f1c1c791b1dde07b4dcac7910f06502d1dd5c9b462e412bdca411f39c47164" Feb 18 19:33:54 crc kubenswrapper[4942]: E0218 19:33:54.328713 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4f1c1c791b1dde07b4dcac7910f06502d1dd5c9b462e412bdca411f39c47164\": container with ID starting with 
d4f1c1c791b1dde07b4dcac7910f06502d1dd5c9b462e412bdca411f39c47164 not found: ID does not exist" containerID="d4f1c1c791b1dde07b4dcac7910f06502d1dd5c9b462e412bdca411f39c47164" Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.328782 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4f1c1c791b1dde07b4dcac7910f06502d1dd5c9b462e412bdca411f39c47164"} err="failed to get container status \"d4f1c1c791b1dde07b4dcac7910f06502d1dd5c9b462e412bdca411f39c47164\": rpc error: code = NotFound desc = could not find container \"d4f1c1c791b1dde07b4dcac7910f06502d1dd5c9b462e412bdca411f39c47164\": container with ID starting with d4f1c1c791b1dde07b4dcac7910f06502d1dd5c9b462e412bdca411f39c47164 not found: ID does not exist" Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.328819 4942 scope.go:117] "RemoveContainer" containerID="b7b09518d61e90a2b8119c940a1f1623600d941819cf448300a00306d69169af" Feb 18 19:33:54 crc kubenswrapper[4942]: E0218 19:33:54.329305 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7b09518d61e90a2b8119c940a1f1623600d941819cf448300a00306d69169af\": container with ID starting with b7b09518d61e90a2b8119c940a1f1623600d941819cf448300a00306d69169af not found: ID does not exist" containerID="b7b09518d61e90a2b8119c940a1f1623600d941819cf448300a00306d69169af" Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.329343 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7b09518d61e90a2b8119c940a1f1623600d941819cf448300a00306d69169af"} err="failed to get container status \"b7b09518d61e90a2b8119c940a1f1623600d941819cf448300a00306d69169af\": rpc error: code = NotFound desc = could not find container \"b7b09518d61e90a2b8119c940a1f1623600d941819cf448300a00306d69169af\": container with ID starting with b7b09518d61e90a2b8119c940a1f1623600d941819cf448300a00306d69169af not found: ID does not 
exist" Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.362904 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6896v"] Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.369469 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6896v"] Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.610068 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d9d4-account-create-update-7gsvf" Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.642787 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcd24\" (UniqueName: \"kubernetes.io/projected/646ba630-1210-431d-8902-b5c0968b35bb-kube-api-access-rcd24\") pod \"646ba630-1210-431d-8902-b5c0968b35bb\" (UID: \"646ba630-1210-431d-8902-b5c0968b35bb\") " Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.642902 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/646ba630-1210-431d-8902-b5c0968b35bb-operator-scripts\") pod \"646ba630-1210-431d-8902-b5c0968b35bb\" (UID: \"646ba630-1210-431d-8902-b5c0968b35bb\") " Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.644023 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/646ba630-1210-431d-8902-b5c0968b35bb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "646ba630-1210-431d-8902-b5c0968b35bb" (UID: "646ba630-1210-431d-8902-b5c0968b35bb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.650231 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/646ba630-1210-431d-8902-b5c0968b35bb-kube-api-access-rcd24" (OuterVolumeSpecName: "kube-api-access-rcd24") pod "646ba630-1210-431d-8902-b5c0968b35bb" (UID: "646ba630-1210-431d-8902-b5c0968b35bb"). InnerVolumeSpecName "kube-api-access-rcd24". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.745424 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcd24\" (UniqueName: \"kubernetes.io/projected/646ba630-1210-431d-8902-b5c0968b35bb-kube-api-access-rcd24\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:54 crc kubenswrapper[4942]: I0218 19:33:54.745470 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/646ba630-1210-431d-8902-b5c0968b35bb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.049164 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d783b8b1-2938-4635-8a04-df942aa84383" path="/var/lib/kubelet/pods/d783b8b1-2938-4635-8a04-df942aa84383/volumes" Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.246012 4942 generic.go:334] "Generic (PLEG): container finished" podID="2f4f7b72-968a-4aed-b6e9-87f43677f342" containerID="837718ff91cb054c2e7fe10e6239bf44f02d0dd7d7855db97e09e837f3dcef65" exitCode=0 Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.246098 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-59tjm" event={"ID":"2f4f7b72-968a-4aed-b6e9-87f43677f342","Type":"ContainerDied","Data":"837718ff91cb054c2e7fe10e6239bf44f02d0dd7d7855db97e09e837f3dcef65"} Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.253513 4942 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/keystone-d9d4-account-create-update-7gsvf" event={"ID":"646ba630-1210-431d-8902-b5c0968b35bb","Type":"ContainerDied","Data":"c2fe7ba176d2c472d430fa4f250787ec1a4f81f13679ca2be4b479e5f0b8e9f6"} Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.253554 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2fe7ba176d2c472d430fa4f250787ec1a4f81f13679ca2be4b479e5f0b8e9f6" Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.253611 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d9d4-account-create-update-7gsvf" Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.272957 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"543db3d4-08d8-473f-a6ad-7e6a5bb9734c","Type":"ContainerStarted","Data":"ebae20c9222b3aee15451c1f0bbaa8cd79204c32bb3e86cff12a92b878e9497f"} Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.708802 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-9457-account-create-update-5hrw4" Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.796047 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba056ec7-86a5-43b6-aebd-a22b21843cc3-operator-scripts\") pod \"ba056ec7-86a5-43b6-aebd-a22b21843cc3\" (UID: \"ba056ec7-86a5-43b6-aebd-a22b21843cc3\") " Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.796236 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jh74\" (UniqueName: \"kubernetes.io/projected/ba056ec7-86a5-43b6-aebd-a22b21843cc3-kube-api-access-7jh74\") pod \"ba056ec7-86a5-43b6-aebd-a22b21843cc3\" (UID: \"ba056ec7-86a5-43b6-aebd-a22b21843cc3\") " Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.797407 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba056ec7-86a5-43b6-aebd-a22b21843cc3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba056ec7-86a5-43b6-aebd-a22b21843cc3" (UID: "ba056ec7-86a5-43b6-aebd-a22b21843cc3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.810469 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba056ec7-86a5-43b6-aebd-a22b21843cc3-kube-api-access-7jh74" (OuterVolumeSpecName: "kube-api-access-7jh74") pod "ba056ec7-86a5-43b6-aebd-a22b21843cc3" (UID: "ba056ec7-86a5-43b6-aebd-a22b21843cc3"). InnerVolumeSpecName "kube-api-access-7jh74". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.904628 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jh74\" (UniqueName: \"kubernetes.io/projected/ba056ec7-86a5-43b6-aebd-a22b21843cc3-kube-api-access-7jh74\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.904658 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba056ec7-86a5-43b6-aebd-a22b21843cc3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.969886 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ce28-account-create-update-h5jjz" Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.975026 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-h49cz" Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.983881 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4vztq" Feb 18 19:33:55 crc kubenswrapper[4942]: I0218 19:33:55.991270 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-9xsbj" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.108218 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6821c713-6163-44f5-a749-415f0c1d8337-operator-scripts\") pod \"6821c713-6163-44f5-a749-415f0c1d8337\" (UID: \"6821c713-6163-44f5-a749-415f0c1d8337\") " Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.108263 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3564c8a-5e18-4c53-b225-7e9baf41a371-operator-scripts\") pod \"a3564c8a-5e18-4c53-b225-7e9baf41a371\" (UID: \"a3564c8a-5e18-4c53-b225-7e9baf41a371\") " Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.108303 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvxgs\" (UniqueName: \"kubernetes.io/projected/371430b6-c9b6-48ba-a1a7-d1ce72a001ec-kube-api-access-mvxgs\") pod \"371430b6-c9b6-48ba-a1a7-d1ce72a001ec\" (UID: \"371430b6-c9b6-48ba-a1a7-d1ce72a001ec\") " Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.108341 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7rl9\" (UniqueName: \"kubernetes.io/projected/7ae58df9-2a9f-4592-a806-b6f5efd71155-kube-api-access-t7rl9\") pod \"7ae58df9-2a9f-4592-a806-b6f5efd71155\" (UID: \"7ae58df9-2a9f-4592-a806-b6f5efd71155\") " Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.108492 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae58df9-2a9f-4592-a806-b6f5efd71155-operator-scripts\") pod \"7ae58df9-2a9f-4592-a806-b6f5efd71155\" (UID: \"7ae58df9-2a9f-4592-a806-b6f5efd71155\") " Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.108517 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/371430b6-c9b6-48ba-a1a7-d1ce72a001ec-operator-scripts\") pod \"371430b6-c9b6-48ba-a1a7-d1ce72a001ec\" (UID: \"371430b6-c9b6-48ba-a1a7-d1ce72a001ec\") " Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.108582 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76ck2\" (UniqueName: \"kubernetes.io/projected/a3564c8a-5e18-4c53-b225-7e9baf41a371-kube-api-access-76ck2\") pod \"a3564c8a-5e18-4c53-b225-7e9baf41a371\" (UID: \"a3564c8a-5e18-4c53-b225-7e9baf41a371\") " Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.108645 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf9x9\" (UniqueName: \"kubernetes.io/projected/6821c713-6163-44f5-a749-415f0c1d8337-kube-api-access-rf9x9\") pod \"6821c713-6163-44f5-a749-415f0c1d8337\" (UID: \"6821c713-6163-44f5-a749-415f0c1d8337\") " Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.108871 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3564c8a-5e18-4c53-b225-7e9baf41a371-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a3564c8a-5e18-4c53-b225-7e9baf41a371" (UID: "a3564c8a-5e18-4c53-b225-7e9baf41a371"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.109299 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3564c8a-5e18-4c53-b225-7e9baf41a371-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.109654 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ae58df9-2a9f-4592-a806-b6f5efd71155-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ae58df9-2a9f-4592-a806-b6f5efd71155" (UID: "7ae58df9-2a9f-4592-a806-b6f5efd71155"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.110476 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6821c713-6163-44f5-a749-415f0c1d8337-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6821c713-6163-44f5-a749-415f0c1d8337" (UID: "6821c713-6163-44f5-a749-415f0c1d8337"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.111154 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/371430b6-c9b6-48ba-a1a7-d1ce72a001ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "371430b6-c9b6-48ba-a1a7-d1ce72a001ec" (UID: "371430b6-c9b6-48ba-a1a7-d1ce72a001ec"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.112844 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/371430b6-c9b6-48ba-a1a7-d1ce72a001ec-kube-api-access-mvxgs" (OuterVolumeSpecName: "kube-api-access-mvxgs") pod "371430b6-c9b6-48ba-a1a7-d1ce72a001ec" (UID: "371430b6-c9b6-48ba-a1a7-d1ce72a001ec"). InnerVolumeSpecName "kube-api-access-mvxgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.112879 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6821c713-6163-44f5-a749-415f0c1d8337-kube-api-access-rf9x9" (OuterVolumeSpecName: "kube-api-access-rf9x9") pod "6821c713-6163-44f5-a749-415f0c1d8337" (UID: "6821c713-6163-44f5-a749-415f0c1d8337"). InnerVolumeSpecName "kube-api-access-rf9x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.113690 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ae58df9-2a9f-4592-a806-b6f5efd71155-kube-api-access-t7rl9" (OuterVolumeSpecName: "kube-api-access-t7rl9") pod "7ae58df9-2a9f-4592-a806-b6f5efd71155" (UID: "7ae58df9-2a9f-4592-a806-b6f5efd71155"). InnerVolumeSpecName "kube-api-access-t7rl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.114214 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3564c8a-5e18-4c53-b225-7e9baf41a371-kube-api-access-76ck2" (OuterVolumeSpecName: "kube-api-access-76ck2") pod "a3564c8a-5e18-4c53-b225-7e9baf41a371" (UID: "a3564c8a-5e18-4c53-b225-7e9baf41a371"). InnerVolumeSpecName "kube-api-access-76ck2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.210846 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvxgs\" (UniqueName: \"kubernetes.io/projected/371430b6-c9b6-48ba-a1a7-d1ce72a001ec-kube-api-access-mvxgs\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.210882 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7rl9\" (UniqueName: \"kubernetes.io/projected/7ae58df9-2a9f-4592-a806-b6f5efd71155-kube-api-access-t7rl9\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.210893 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae58df9-2a9f-4592-a806-b6f5efd71155-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.210902 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/371430b6-c9b6-48ba-a1a7-d1ce72a001ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.210912 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76ck2\" (UniqueName: \"kubernetes.io/projected/a3564c8a-5e18-4c53-b225-7e9baf41a371-kube-api-access-76ck2\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.210921 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rf9x9\" (UniqueName: \"kubernetes.io/projected/6821c713-6163-44f5-a749-415f0c1d8337-kube-api-access-rf9x9\") on node \"crc\" DevicePath \"\"" Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.210932 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6821c713-6163-44f5-a749-415f0c1d8337-operator-scripts\") on node \"crc\" DevicePath \"\"" 
Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.299830 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-9457-account-create-update-5hrw4"
Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.299805 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-9457-account-create-update-5hrw4" event={"ID":"ba056ec7-86a5-43b6-aebd-a22b21843cc3","Type":"ContainerDied","Data":"38b3c170c47184369c1ec21f9724664ac8066d16e78c42b7d539b1e87174297e"}
Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.300163 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38b3c170c47184369c1ec21f9724664ac8066d16e78c42b7d539b1e87174297e"
Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.303063 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9xsbj"
Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.303064 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9xsbj" event={"ID":"6821c713-6163-44f5-a749-415f0c1d8337","Type":"ContainerDied","Data":"1eb3204b9b0589d490ccc1c18591bfe59c0e4d3c2638fc8531a3fb7550c8d9bf"}
Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.303091 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1eb3204b9b0589d490ccc1c18591bfe59c0e4d3c2638fc8531a3fb7550c8d9bf"
Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.314520 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-h49cz" event={"ID":"a3564c8a-5e18-4c53-b225-7e9baf41a371","Type":"ContainerDied","Data":"66f2076a4d3224486921697544c5266b6a3f5f3fd789e15549a3d73d0240e056"}
Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.314564 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66f2076a4d3224486921697544c5266b6a3f5f3fd789e15549a3d73d0240e056"
Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.314638 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-h49cz"
Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.316707 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ce28-account-create-update-h5jjz" event={"ID":"371430b6-c9b6-48ba-a1a7-d1ce72a001ec","Type":"ContainerDied","Data":"eee16dd4bd5b8b487af0f0974bd123a4773635c28b49751dee93789a473f7b0b"}
Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.316750 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eee16dd4bd5b8b487af0f0974bd123a4773635c28b49751dee93789a473f7b0b"
Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.316839 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ce28-account-create-update-h5jjz"
Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.320907 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4vztq" event={"ID":"7ae58df9-2a9f-4592-a806-b6f5efd71155","Type":"ContainerDied","Data":"f5b0e5f07640ac134e229b85e5cd422e569347ef8859ca3988fc0a14ab76decb"}
Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.320950 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5b0e5f07640ac134e229b85e5cd422e569347ef8859ca3988fc0a14ab76decb"
Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.320993 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4vztq"
Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.720696 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-59tjm"
Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.820945 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f4f7b72-968a-4aed-b6e9-87f43677f342-operator-scripts\") pod \"2f4f7b72-968a-4aed-b6e9-87f43677f342\" (UID: \"2f4f7b72-968a-4aed-b6e9-87f43677f342\") "
Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.821067 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc7sh\" (UniqueName: \"kubernetes.io/projected/2f4f7b72-968a-4aed-b6e9-87f43677f342-kube-api-access-cc7sh\") pod \"2f4f7b72-968a-4aed-b6e9-87f43677f342\" (UID: \"2f4f7b72-968a-4aed-b6e9-87f43677f342\") "
Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.821588 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f4f7b72-968a-4aed-b6e9-87f43677f342-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2f4f7b72-968a-4aed-b6e9-87f43677f342" (UID: "2f4f7b72-968a-4aed-b6e9-87f43677f342"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.829199 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f4f7b72-968a-4aed-b6e9-87f43677f342-kube-api-access-cc7sh" (OuterVolumeSpecName: "kube-api-access-cc7sh") pod "2f4f7b72-968a-4aed-b6e9-87f43677f342" (UID: "2f4f7b72-968a-4aed-b6e9-87f43677f342"). InnerVolumeSpecName "kube-api-access-cc7sh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.923540 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f4f7b72-968a-4aed-b6e9-87f43677f342-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:33:56 crc kubenswrapper[4942]: I0218 19:33:56.923583 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc7sh\" (UniqueName: \"kubernetes.io/projected/2f4f7b72-968a-4aed-b6e9-87f43677f342-kube-api-access-cc7sh\") on node \"crc\" DevicePath \"\""
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.152669 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-4vztq"]
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.160444 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-4vztq"]
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.240654 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-g7rkd"]
Feb 18 19:33:57 crc kubenswrapper[4942]: E0218 19:33:57.243685 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d783b8b1-2938-4635-8a04-df942aa84383" containerName="init"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.243863 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="d783b8b1-2938-4635-8a04-df942aa84383" containerName="init"
Feb 18 19:33:57 crc kubenswrapper[4942]: E0218 19:33:57.243952 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae58df9-2a9f-4592-a806-b6f5efd71155" containerName="mariadb-account-create-update"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.244029 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae58df9-2a9f-4592-a806-b6f5efd71155" containerName="mariadb-account-create-update"
Feb 18 19:33:57 crc kubenswrapper[4942]: E0218 19:33:57.244109 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3564c8a-5e18-4c53-b225-7e9baf41a371" containerName="mariadb-database-create"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.244182 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3564c8a-5e18-4c53-b225-7e9baf41a371" containerName="mariadb-database-create"
Feb 18 19:33:57 crc kubenswrapper[4942]: E0218 19:33:57.244366 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="646ba630-1210-431d-8902-b5c0968b35bb" containerName="mariadb-account-create-update"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.244450 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="646ba630-1210-431d-8902-b5c0968b35bb" containerName="mariadb-account-create-update"
Feb 18 19:33:57 crc kubenswrapper[4942]: E0218 19:33:57.244530 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d783b8b1-2938-4635-8a04-df942aa84383" containerName="dnsmasq-dns"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.244796 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="d783b8b1-2938-4635-8a04-df942aa84383" containerName="dnsmasq-dns"
Feb 18 19:33:57 crc kubenswrapper[4942]: E0218 19:33:57.244955 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f4f7b72-968a-4aed-b6e9-87f43677f342" containerName="mariadb-database-create"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.245039 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f4f7b72-968a-4aed-b6e9-87f43677f342" containerName="mariadb-database-create"
Feb 18 19:33:57 crc kubenswrapper[4942]: E0218 19:33:57.245128 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6821c713-6163-44f5-a749-415f0c1d8337" containerName="mariadb-database-create"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.245249 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="6821c713-6163-44f5-a749-415f0c1d8337" containerName="mariadb-database-create"
Feb 18 19:33:57 crc kubenswrapper[4942]: E0218 19:33:57.245355 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="371430b6-c9b6-48ba-a1a7-d1ce72a001ec" containerName="mariadb-account-create-update"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.245443 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="371430b6-c9b6-48ba-a1a7-d1ce72a001ec" containerName="mariadb-account-create-update"
Feb 18 19:33:57 crc kubenswrapper[4942]: E0218 19:33:57.245537 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba056ec7-86a5-43b6-aebd-a22b21843cc3" containerName="mariadb-account-create-update"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.245632 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba056ec7-86a5-43b6-aebd-a22b21843cc3" containerName="mariadb-account-create-update"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.245897 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3564c8a-5e18-4c53-b225-7e9baf41a371" containerName="mariadb-database-create"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.246003 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ae58df9-2a9f-4592-a806-b6f5efd71155" containerName="mariadb-account-create-update"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.246123 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="d783b8b1-2938-4635-8a04-df942aa84383" containerName="dnsmasq-dns"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.246197 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="646ba630-1210-431d-8902-b5c0968b35bb" containerName="mariadb-account-create-update"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.246285 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="371430b6-c9b6-48ba-a1a7-d1ce72a001ec" containerName="mariadb-account-create-update"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.246367 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba056ec7-86a5-43b6-aebd-a22b21843cc3" containerName="mariadb-account-create-update"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.246448 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="6821c713-6163-44f5-a749-415f0c1d8337" containerName="mariadb-database-create"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.246523 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f4f7b72-968a-4aed-b6e9-87f43677f342" containerName="mariadb-database-create"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.247255 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-g7rkd"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.250244 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-g7rkd"]
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.251272 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.330367 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb-operator-scripts\") pod \"root-account-create-update-g7rkd\" (UID: \"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb\") " pod="openstack/root-account-create-update-g7rkd"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.330511 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mkzm\" (UniqueName: \"kubernetes.io/projected/7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb-kube-api-access-8mkzm\") pod \"root-account-create-update-g7rkd\" (UID: \"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb\") " pod="openstack/root-account-create-update-g7rkd"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.330818 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-59tjm" event={"ID":"2f4f7b72-968a-4aed-b6e9-87f43677f342","Type":"ContainerDied","Data":"2b0a48b277512b0bf7d988c25ddbb0deb0bbee03c0e145e86446505884383033"}
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.330852 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b0a48b277512b0bf7d988c25ddbb0deb0bbee03c0e145e86446505884383033"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.330981 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-59tjm"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.431852 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mkzm\" (UniqueName: \"kubernetes.io/projected/7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb-kube-api-access-8mkzm\") pod \"root-account-create-update-g7rkd\" (UID: \"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb\") " pod="openstack/root-account-create-update-g7rkd"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.432327 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb-operator-scripts\") pod \"root-account-create-update-g7rkd\" (UID: \"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb\") " pod="openstack/root-account-create-update-g7rkd"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.433146 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb-operator-scripts\") pod \"root-account-create-update-g7rkd\" (UID: \"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb\") " pod="openstack/root-account-create-update-g7rkd"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.458582 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mkzm\" (UniqueName: \"kubernetes.io/projected/7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb-kube-api-access-8mkzm\") pod \"root-account-create-update-g7rkd\" (UID: \"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb\") " pod="openstack/root-account-create-update-g7rkd"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.584497 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-g7rkd"
Feb 18 19:33:57 crc kubenswrapper[4942]: I0218 19:33:57.614431 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Feb 18 19:33:58 crc kubenswrapper[4942]: I0218 19:33:58.821266 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-g7rkd"]
Feb 18 19:33:59 crc kubenswrapper[4942]: I0218 19:33:59.055679 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ae58df9-2a9f-4592-a806-b6f5efd71155" path="/var/lib/kubelet/pods/7ae58df9-2a9f-4592-a806-b6f5efd71155/volumes"
Feb 18 19:33:59 crc kubenswrapper[4942]: I0218 19:33:59.352585 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"543db3d4-08d8-473f-a6ad-7e6a5bb9734c","Type":"ContainerStarted","Data":"fd3aef2dcd467a4e4443cb718f2ad37e73afe0c2cc787eca566999184738b19b"}
Feb 18 19:33:59 crc kubenswrapper[4942]: I0218 19:33:59.354535 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g7rkd" event={"ID":"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb","Type":"ContainerStarted","Data":"3ca7995811727ed16b81c6dacf4b796cf8cb865100445c8661ce6034aba901d3"}
Feb 18 19:33:59 crc kubenswrapper[4942]: I0218 19:33:59.354577 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g7rkd" event={"ID":"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb","Type":"ContainerStarted","Data":"9091bd51dd260200eceb22826dedd139c79e73e5248d64dfbd7f691b19339ef5"}
Feb 18 19:33:59 crc kubenswrapper[4942]: I0218 19:33:59.389115 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=10.209447248 podStartE2EDuration="47.389100825s" podCreationTimestamp="2026-02-18 19:33:12 +0000 UTC" firstStartedPulling="2026-02-18 19:33:21.134631212 +0000 UTC m=+960.839563877" lastFinishedPulling="2026-02-18 19:33:58.314284789 +0000 UTC m=+998.019217454" observedRunningTime="2026-02-18 19:33:59.387436303 +0000 UTC m=+999.092368968" watchObservedRunningTime="2026-02-18 19:33:59.389100825 +0000 UTC m=+999.094033490"
Feb 18 19:33:59 crc kubenswrapper[4942]: I0218 19:33:59.410737 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-g7rkd" podStartSLOduration=2.41071641 podStartE2EDuration="2.41071641s" podCreationTimestamp="2026-02-18 19:33:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:33:59.409652982 +0000 UTC m=+999.114585687" watchObservedRunningTime="2026-02-18 19:33:59.41071641 +0000 UTC m=+999.115649075"
Feb 18 19:33:59 crc kubenswrapper[4942]: I0218 19:33:59.602918 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0"
Feb 18 19:33:59 crc kubenswrapper[4942]: E0218 19:33:59.603164 4942 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 18 19:33:59 crc kubenswrapper[4942]: E0218 19:33:59.603933 4942 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 18 19:33:59 crc kubenswrapper[4942]: E0218 19:33:59.604019 4942 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift podName:125bdbb5-76a8-450f-b645-2133024a1bd0 nodeName:}" failed. No retries permitted until 2026-02-18 19:34:15.603995437 +0000 UTC m=+1015.308928102 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift") pod "swift-storage-0" (UID: "125bdbb5-76a8-450f-b645-2133024a1bd0") : configmap "swift-ring-files" not found
Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.334218 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-tjf5x"]
Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.335715 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tjf5x"
Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.354534 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tjf5x"]
Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.365899 4942 generic.go:334] "Generic (PLEG): container finished" podID="2eb51639-e1f9-4c9f-baa9-30d64d3abb7a" containerID="55829c9fbf3eef2bdd3e7606f5ad7942662f83792b2404329b3607ab1503d0ae" exitCode=0
Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.365967 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cwjhb" event={"ID":"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a","Type":"ContainerDied","Data":"55829c9fbf3eef2bdd3e7606f5ad7942662f83792b2404329b3607ab1503d0ae"}
Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.368220 4942 generic.go:334] "Generic (PLEG): container finished" podID="7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb" containerID="3ca7995811727ed16b81c6dacf4b796cf8cb865100445c8661ce6034aba901d3" exitCode=0
Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.368263 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g7rkd" event={"ID":"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb","Type":"ContainerDied","Data":"3ca7995811727ed16b81c6dacf4b796cf8cb865100445c8661ce6034aba901d3"}
Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.516626 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a1ca129-f896-4d68-b119-701a991fe0ba-operator-scripts\") pod \"glance-db-create-tjf5x\" (UID: \"6a1ca129-f896-4d68-b119-701a991fe0ba\") " pod="openstack/glance-db-create-tjf5x"
Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.516676 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qkc8\" (UniqueName: \"kubernetes.io/projected/6a1ca129-f896-4d68-b119-701a991fe0ba-kube-api-access-7qkc8\") pod \"glance-db-create-tjf5x\" (UID: \"6a1ca129-f896-4d68-b119-701a991fe0ba\") " pod="openstack/glance-db-create-tjf5x"
Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.537336 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-8ff9-account-create-update-k7n8f"]
Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.538573 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8ff9-account-create-update-k7n8f"
Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.541860 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.550697 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8ff9-account-create-update-k7n8f"]
Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.622649 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzmr5\" (UniqueName: \"kubernetes.io/projected/8611c14f-da0c-410e-9c3a-dc6cb5a698a7-kube-api-access-lzmr5\") pod \"glance-8ff9-account-create-update-k7n8f\" (UID: \"8611c14f-da0c-410e-9c3a-dc6cb5a698a7\") " pod="openstack/glance-8ff9-account-create-update-k7n8f"
Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.622745 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a1ca129-f896-4d68-b119-701a991fe0ba-operator-scripts\") pod \"glance-db-create-tjf5x\" (UID: \"6a1ca129-f896-4d68-b119-701a991fe0ba\") " pod="openstack/glance-db-create-tjf5x"
Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.622793 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qkc8\" (UniqueName: \"kubernetes.io/projected/6a1ca129-f896-4d68-b119-701a991fe0ba-kube-api-access-7qkc8\") pod \"glance-db-create-tjf5x\" (UID: \"6a1ca129-f896-4d68-b119-701a991fe0ba\") " pod="openstack/glance-db-create-tjf5x"
Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.622825 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8611c14f-da0c-410e-9c3a-dc6cb5a698a7-operator-scripts\") pod \"glance-8ff9-account-create-update-k7n8f\" (UID: \"8611c14f-da0c-410e-9c3a-dc6cb5a698a7\") " pod="openstack/glance-8ff9-account-create-update-k7n8f"
Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.623653 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a1ca129-f896-4d68-b119-701a991fe0ba-operator-scripts\") pod \"glance-db-create-tjf5x\" (UID: \"6a1ca129-f896-4d68-b119-701a991fe0ba\") " pod="openstack/glance-db-create-tjf5x"
Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.642273 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qkc8\" (UniqueName: \"kubernetes.io/projected/6a1ca129-f896-4d68-b119-701a991fe0ba-kube-api-access-7qkc8\") pod \"glance-db-create-tjf5x\" (UID: \"6a1ca129-f896-4d68-b119-701a991fe0ba\") " pod="openstack/glance-db-create-tjf5x"
Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.694531 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tjf5x"
Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.723889 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8611c14f-da0c-410e-9c3a-dc6cb5a698a7-operator-scripts\") pod \"glance-8ff9-account-create-update-k7n8f\" (UID: \"8611c14f-da0c-410e-9c3a-dc6cb5a698a7\") " pod="openstack/glance-8ff9-account-create-update-k7n8f"
Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.724030 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzmr5\" (UniqueName: \"kubernetes.io/projected/8611c14f-da0c-410e-9c3a-dc6cb5a698a7-kube-api-access-lzmr5\") pod \"glance-8ff9-account-create-update-k7n8f\" (UID: \"8611c14f-da0c-410e-9c3a-dc6cb5a698a7\") " pod="openstack/glance-8ff9-account-create-update-k7n8f"
Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.724652 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8611c14f-da0c-410e-9c3a-dc6cb5a698a7-operator-scripts\") pod \"glance-8ff9-account-create-update-k7n8f\" (UID: \"8611c14f-da0c-410e-9c3a-dc6cb5a698a7\") " pod="openstack/glance-8ff9-account-create-update-k7n8f"
Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.743290 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzmr5\" (UniqueName: \"kubernetes.io/projected/8611c14f-da0c-410e-9c3a-dc6cb5a698a7-kube-api-access-lzmr5\") pod \"glance-8ff9-account-create-update-k7n8f\" (UID: \"8611c14f-da0c-410e-9c3a-dc6cb5a698a7\") " pod="openstack/glance-8ff9-account-create-update-k7n8f"
Feb 18 19:34:00 crc kubenswrapper[4942]: I0218 19:34:00.857713 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8ff9-account-create-update-k7n8f"
Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.136745 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tjf5x"]
Feb 18 19:34:01 crc kubenswrapper[4942]: W0218 19:34:01.139655 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a1ca129_f896_4d68_b119_701a991fe0ba.slice/crio-48727d3eb0eedecfb6dafc81742628ccd42d08cfb10aa697d75200f08ec66f17 WatchSource:0}: Error finding container 48727d3eb0eedecfb6dafc81742628ccd42d08cfb10aa697d75200f08ec66f17: Status 404 returned error can't find the container with id 48727d3eb0eedecfb6dafc81742628ccd42d08cfb10aa697d75200f08ec66f17
Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.291680 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8ff9-account-create-update-k7n8f"]
Feb 18 19:34:01 crc kubenswrapper[4942]: W0218 19:34:01.292746 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8611c14f_da0c_410e_9c3a_dc6cb5a698a7.slice/crio-212d831cb56941ea16551014811164e2bba8ae62aba8307e744d2ba3a32d9f46 WatchSource:0}: Error finding container 212d831cb56941ea16551014811164e2bba8ae62aba8307e744d2ba3a32d9f46: Status 404 returned error can't find the container with id 212d831cb56941ea16551014811164e2bba8ae62aba8307e744d2ba3a32d9f46
Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.378747 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8ff9-account-create-update-k7n8f" event={"ID":"8611c14f-da0c-410e-9c3a-dc6cb5a698a7","Type":"ContainerStarted","Data":"212d831cb56941ea16551014811164e2bba8ae62aba8307e744d2ba3a32d9f46"}
Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.380845 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tjf5x" event={"ID":"6a1ca129-f896-4d68-b119-701a991fe0ba","Type":"ContainerStarted","Data":"c942add3a433a64faf7638403a168e22e7b5e2f26ceaa17e1731c6044072942d"}
Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.380900 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tjf5x" event={"ID":"6a1ca129-f896-4d68-b119-701a991fe0ba","Type":"ContainerStarted","Data":"48727d3eb0eedecfb6dafc81742628ccd42d08cfb10aa697d75200f08ec66f17"}
Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.407125 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-tjf5x" podStartSLOduration=1.407104583 podStartE2EDuration="1.407104583s" podCreationTimestamp="2026-02-18 19:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:34:01.397136727 +0000 UTC m=+1001.102069392" watchObservedRunningTime="2026-02-18 19:34:01.407104583 +0000 UTC m=+1001.112037268"
Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.844791 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-cwjhb"
Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.851158 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-g7rkd"
Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.944192 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-dispersionconf\") pod \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") "
Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.944244 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb-operator-scripts\") pod \"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb\" (UID: \"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb\") "
Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.944297 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-ring-data-devices\") pod \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") "
Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.944323 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-scripts\") pod \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") "
Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.944381 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wqmf\" (UniqueName: \"kubernetes.io/projected/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-kube-api-access-8wqmf\") pod \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") "
Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.944400 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-swiftconf\") pod \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") "
Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.944415 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-combined-ca-bundle\") pod \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") "
Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.944463 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-etc-swift\") pod \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\" (UID: \"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a\") "
Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.944478 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mkzm\" (UniqueName: \"kubernetes.io/projected/7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb-kube-api-access-8mkzm\") pod \"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb\" (UID: \"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb\") "
Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.946679 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a" (UID: "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.946690 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb" (UID: "7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.949736 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a" (UID: "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.955978 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb-kube-api-access-8mkzm" (OuterVolumeSpecName: "kube-api-access-8mkzm") pod "7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb" (UID: "7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb"). InnerVolumeSpecName "kube-api-access-8mkzm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.956202 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-kube-api-access-8wqmf" (OuterVolumeSpecName: "kube-api-access-8wqmf") pod "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a" (UID: "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a"). InnerVolumeSpecName "kube-api-access-8wqmf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.958851 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a" (UID: "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.987741 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a" (UID: "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.988258 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a" (UID: "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:01 crc kubenswrapper[4942]: I0218 19:34:01.994162 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-scripts" (OuterVolumeSpecName: "scripts") pod "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a" (UID: "2eb51639-e1f9-4c9f-baa9-30d64d3abb7a"). InnerVolumeSpecName "scripts".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.047018 4942 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.047052 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.047061 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wqmf\" (UniqueName: \"kubernetes.io/projected/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-kube-api-access-8wqmf\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.047072 4942 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.047080 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.047088 4942 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.047097 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mkzm\" (UniqueName: \"kubernetes.io/projected/7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb-kube-api-access-8mkzm\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.047106 
4942 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2eb51639-e1f9-4c9f-baa9-30d64d3abb7a-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.047114 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.394400 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g7rkd" event={"ID":"7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb","Type":"ContainerDied","Data":"9091bd51dd260200eceb22826dedd139c79e73e5248d64dfbd7f691b19339ef5"} Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.394437 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9091bd51dd260200eceb22826dedd139c79e73e5248d64dfbd7f691b19339ef5" Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.394488 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-g7rkd" Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.408812 4942 generic.go:334] "Generic (PLEG): container finished" podID="8611c14f-da0c-410e-9c3a-dc6cb5a698a7" containerID="549770ba7dc9b2efdf1b7dbd1827ec366b9e1e693aeec0f1a695091cdbeda9bc" exitCode=0 Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.408922 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8ff9-account-create-update-k7n8f" event={"ID":"8611c14f-da0c-410e-9c3a-dc6cb5a698a7","Type":"ContainerDied","Data":"549770ba7dc9b2efdf1b7dbd1827ec366b9e1e693aeec0f1a695091cdbeda9bc"} Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.421744 4942 generic.go:334] "Generic (PLEG): container finished" podID="6a1ca129-f896-4d68-b119-701a991fe0ba" containerID="c942add3a433a64faf7638403a168e22e7b5e2f26ceaa17e1731c6044072942d" exitCode=0 Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.422016 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tjf5x" event={"ID":"6a1ca129-f896-4d68-b119-701a991fe0ba","Type":"ContainerDied","Data":"c942add3a433a64faf7638403a168e22e7b5e2f26ceaa17e1731c6044072942d"} Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.426670 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cwjhb" event={"ID":"2eb51639-e1f9-4c9f-baa9-30d64d3abb7a","Type":"ContainerDied","Data":"98b157f8537f821e0f49062fdd12779fd66abc1af86316a5e1b821365807dd5d"} Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.426706 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98b157f8537f821e0f49062fdd12779fd66abc1af86316a5e1b821365807dd5d" Feb 18 19:34:02 crc kubenswrapper[4942]: I0218 19:34:02.426788 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-cwjhb" Feb 18 19:34:03 crc kubenswrapper[4942]: I0218 19:34:03.873837 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tjf5x" Feb 18 19:34:03 crc kubenswrapper[4942]: I0218 19:34:03.881247 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8ff9-account-create-update-k7n8f" Feb 18 19:34:03 crc kubenswrapper[4942]: I0218 19:34:03.882244 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-g7rkd"] Feb 18 19:34:03 crc kubenswrapper[4942]: I0218 19:34:03.889855 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-g7rkd"] Feb 18 19:34:03 crc kubenswrapper[4942]: I0218 19:34:03.977954 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:03 crc kubenswrapper[4942]: I0218 19:34:03.987008 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a1ca129-f896-4d68-b119-701a991fe0ba-operator-scripts\") pod \"6a1ca129-f896-4d68-b119-701a991fe0ba\" (UID: \"6a1ca129-f896-4d68-b119-701a991fe0ba\") " Feb 18 19:34:03 crc kubenswrapper[4942]: I0218 19:34:03.987076 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qkc8\" (UniqueName: \"kubernetes.io/projected/6a1ca129-f896-4d68-b119-701a991fe0ba-kube-api-access-7qkc8\") pod \"6a1ca129-f896-4d68-b119-701a991fe0ba\" (UID: \"6a1ca129-f896-4d68-b119-701a991fe0ba\") " Feb 18 19:34:03 crc kubenswrapper[4942]: I0218 19:34:03.987122 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzmr5\" (UniqueName: \"kubernetes.io/projected/8611c14f-da0c-410e-9c3a-dc6cb5a698a7-kube-api-access-lzmr5\") pod 
\"8611c14f-da0c-410e-9c3a-dc6cb5a698a7\" (UID: \"8611c14f-da0c-410e-9c3a-dc6cb5a698a7\") " Feb 18 19:34:03 crc kubenswrapper[4942]: I0218 19:34:03.987141 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8611c14f-da0c-410e-9c3a-dc6cb5a698a7-operator-scripts\") pod \"8611c14f-da0c-410e-9c3a-dc6cb5a698a7\" (UID: \"8611c14f-da0c-410e-9c3a-dc6cb5a698a7\") " Feb 18 19:34:03 crc kubenswrapper[4942]: I0218 19:34:03.988157 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8611c14f-da0c-410e-9c3a-dc6cb5a698a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8611c14f-da0c-410e-9c3a-dc6cb5a698a7" (UID: "8611c14f-da0c-410e-9c3a-dc6cb5a698a7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:03 crc kubenswrapper[4942]: I0218 19:34:03.988277 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a1ca129-f896-4d68-b119-701a991fe0ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6a1ca129-f896-4d68-b119-701a991fe0ba" (UID: "6a1ca129-f896-4d68-b119-701a991fe0ba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:03 crc kubenswrapper[4942]: I0218 19:34:03.994537 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8611c14f-da0c-410e-9c3a-dc6cb5a698a7-kube-api-access-lzmr5" (OuterVolumeSpecName: "kube-api-access-lzmr5") pod "8611c14f-da0c-410e-9c3a-dc6cb5a698a7" (UID: "8611c14f-da0c-410e-9c3a-dc6cb5a698a7"). InnerVolumeSpecName "kube-api-access-lzmr5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:03 crc kubenswrapper[4942]: I0218 19:34:03.996209 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a1ca129-f896-4d68-b119-701a991fe0ba-kube-api-access-7qkc8" (OuterVolumeSpecName: "kube-api-access-7qkc8") pod "6a1ca129-f896-4d68-b119-701a991fe0ba" (UID: "6a1ca129-f896-4d68-b119-701a991fe0ba"). InnerVolumeSpecName "kube-api-access-7qkc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:04 crc kubenswrapper[4942]: I0218 19:34:04.089120 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a1ca129-f896-4d68-b119-701a991fe0ba-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:04 crc kubenswrapper[4942]: I0218 19:34:04.089154 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qkc8\" (UniqueName: \"kubernetes.io/projected/6a1ca129-f896-4d68-b119-701a991fe0ba-kube-api-access-7qkc8\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:04 crc kubenswrapper[4942]: I0218 19:34:04.089168 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzmr5\" (UniqueName: \"kubernetes.io/projected/8611c14f-da0c-410e-9c3a-dc6cb5a698a7-kube-api-access-lzmr5\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:04 crc kubenswrapper[4942]: I0218 19:34:04.089208 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8611c14f-da0c-410e-9c3a-dc6cb5a698a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:04 crc kubenswrapper[4942]: I0218 19:34:04.459086 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tjf5x" event={"ID":"6a1ca129-f896-4d68-b119-701a991fe0ba","Type":"ContainerDied","Data":"48727d3eb0eedecfb6dafc81742628ccd42d08cfb10aa697d75200f08ec66f17"} Feb 18 19:34:04 crc kubenswrapper[4942]: I0218 19:34:04.459460 
4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48727d3eb0eedecfb6dafc81742628ccd42d08cfb10aa697d75200f08ec66f17" Feb 18 19:34:04 crc kubenswrapper[4942]: I0218 19:34:04.459605 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tjf5x" Feb 18 19:34:04 crc kubenswrapper[4942]: I0218 19:34:04.461548 4942 generic.go:334] "Generic (PLEG): container finished" podID="b6b41292-c562-4964-bb25-d8945415b3da" containerID="c197a7dd3977502f99f2f3aa2cb1b55953ff18362b376d981b554df6b529f782" exitCode=0 Feb 18 19:34:04 crc kubenswrapper[4942]: I0218 19:34:04.461826 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b6b41292-c562-4964-bb25-d8945415b3da","Type":"ContainerDied","Data":"c197a7dd3977502f99f2f3aa2cb1b55953ff18362b376d981b554df6b529f782"} Feb 18 19:34:04 crc kubenswrapper[4942]: I0218 19:34:04.467279 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8ff9-account-create-update-k7n8f" event={"ID":"8611c14f-da0c-410e-9c3a-dc6cb5a698a7","Type":"ContainerDied","Data":"212d831cb56941ea16551014811164e2bba8ae62aba8307e744d2ba3a32d9f46"} Feb 18 19:34:04 crc kubenswrapper[4942]: I0218 19:34:04.467346 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="212d831cb56941ea16551014811164e2bba8ae62aba8307e744d2ba3a32d9f46" Feb 18 19:34:04 crc kubenswrapper[4942]: I0218 19:34:04.467469 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8ff9-account-create-update-k7n8f" Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.046842 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb" path="/var/lib/kubelet/pods/7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb/volumes" Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.478451 4942 generic.go:334] "Generic (PLEG): container finished" podID="77de5cb0-e446-407d-9e32-b13f39c84ae2" containerID="e242de7f4af5755759f500d3c9dbc2395ec18d3bfe3fe38cf008cae5b5314de3" exitCode=0 Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.478534 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"77de5cb0-e446-407d-9e32-b13f39c84ae2","Type":"ContainerDied","Data":"e242de7f4af5755759f500d3c9dbc2395ec18d3bfe3fe38cf008cae5b5314de3"} Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.481261 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b6b41292-c562-4964-bb25-d8945415b3da","Type":"ContainerStarted","Data":"4f752d07e5ee2189bcc31aa4e606e8bcb5f06355b290a2073d2a7609686ffd94"} Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.481923 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.551670 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=52.589590562 podStartE2EDuration="1m0.55165067s" podCreationTimestamp="2026-02-18 19:33:05 +0000 UTC" firstStartedPulling="2026-02-18 19:33:20.608427606 +0000 UTC m=+960.313360271" lastFinishedPulling="2026-02-18 19:33:28.570487714 +0000 UTC m=+968.275420379" observedRunningTime="2026-02-18 19:34:05.539351144 +0000 UTC m=+1005.244283829" watchObservedRunningTime="2026-02-18 19:34:05.55165067 +0000 UTC m=+1005.256583335" Feb 18 
19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.680995 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-zw8ls"] Feb 18 19:34:05 crc kubenswrapper[4942]: E0218 19:34:05.681353 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb" containerName="mariadb-account-create-update" Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.681371 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb" containerName="mariadb-account-create-update" Feb 18 19:34:05 crc kubenswrapper[4942]: E0218 19:34:05.681387 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eb51639-e1f9-4c9f-baa9-30d64d3abb7a" containerName="swift-ring-rebalance" Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.681393 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eb51639-e1f9-4c9f-baa9-30d64d3abb7a" containerName="swift-ring-rebalance" Feb 18 19:34:05 crc kubenswrapper[4942]: E0218 19:34:05.681408 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1ca129-f896-4d68-b119-701a991fe0ba" containerName="mariadb-database-create" Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.681416 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1ca129-f896-4d68-b119-701a991fe0ba" containerName="mariadb-database-create" Feb 18 19:34:05 crc kubenswrapper[4942]: E0218 19:34:05.681427 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8611c14f-da0c-410e-9c3a-dc6cb5a698a7" containerName="mariadb-account-create-update" Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.681434 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="8611c14f-da0c-410e-9c3a-dc6cb5a698a7" containerName="mariadb-account-create-update" Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.681575 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb51639-e1f9-4c9f-baa9-30d64d3abb7a" 
containerName="swift-ring-rebalance" Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.681593 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1ca129-f896-4d68-b119-701a991fe0ba" containerName="mariadb-database-create" Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.681604 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a52f4fe-2f25-4cf1-8373-3cf3a20f17eb" containerName="mariadb-account-create-update" Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.681610 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="8611c14f-da0c-410e-9c3a-dc6cb5a698a7" containerName="mariadb-account-create-update" Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.682342 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-zw8ls" Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.684726 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-j6c2t" Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.684746 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.698672 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-zw8ls"] Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.823399 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-combined-ca-bundle\") pod \"glance-db-sync-zw8ls\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " pod="openstack/glance-db-sync-zw8ls" Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.823480 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-db-sync-config-data\") pod \"glance-db-sync-zw8ls\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " pod="openstack/glance-db-sync-zw8ls" Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.823518 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb9h8\" (UniqueName: \"kubernetes.io/projected/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-kube-api-access-gb9h8\") pod \"glance-db-sync-zw8ls\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " pod="openstack/glance-db-sync-zw8ls" Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.823641 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-config-data\") pod \"glance-db-sync-zw8ls\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " pod="openstack/glance-db-sync-zw8ls" Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.924960 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-db-sync-config-data\") pod \"glance-db-sync-zw8ls\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " pod="openstack/glance-db-sync-zw8ls" Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.925023 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb9h8\" (UniqueName: \"kubernetes.io/projected/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-kube-api-access-gb9h8\") pod \"glance-db-sync-zw8ls\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " pod="openstack/glance-db-sync-zw8ls" Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.925075 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-config-data\") pod \"glance-db-sync-zw8ls\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " pod="openstack/glance-db-sync-zw8ls" Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.925127 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-combined-ca-bundle\") pod \"glance-db-sync-zw8ls\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " pod="openstack/glance-db-sync-zw8ls" Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.929344 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-db-sync-config-data\") pod \"glance-db-sync-zw8ls\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " pod="openstack/glance-db-sync-zw8ls" Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.929694 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-combined-ca-bundle\") pod \"glance-db-sync-zw8ls\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " pod="openstack/glance-db-sync-zw8ls" Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.930852 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-config-data\") pod \"glance-db-sync-zw8ls\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " pod="openstack/glance-db-sync-zw8ls" Feb 18 19:34:05 crc kubenswrapper[4942]: I0218 19:34:05.942436 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb9h8\" (UniqueName: \"kubernetes.io/projected/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-kube-api-access-gb9h8\") pod \"glance-db-sync-zw8ls\" (UID: 
\"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " pod="openstack/glance-db-sync-zw8ls" Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.003349 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-zw8ls" Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.081156 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-llsph" podUID="28fe292c-6cda-4e3b-bce3-544ded95930b" containerName="ovn-controller" probeResult="failure" output=< Feb 18 19:34:06 crc kubenswrapper[4942]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 18 19:34:06 crc kubenswrapper[4942]: > Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.084068 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.096511 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7xrn9" Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.490708 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"77de5cb0-e446-407d-9e32-b13f39c84ae2","Type":"ContainerStarted","Data":"2a06461943313e923de9b2391c5eb34c6a9c08986670b8d6bae063427214e0e7"} Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.491089 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.561298 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=52.119115986 podStartE2EDuration="1m1.561280825s" podCreationTimestamp="2026-02-18 19:33:05 +0000 UTC" firstStartedPulling="2026-02-18 19:33:20.106938004 +0000 UTC m=+959.811870669" lastFinishedPulling="2026-02-18 19:33:29.549102843 +0000 UTC m=+969.254035508" 
observedRunningTime="2026-02-18 19:34:06.557303343 +0000 UTC m=+1006.262236008" watchObservedRunningTime="2026-02-18 19:34:06.561280825 +0000 UTC m=+1006.266213480" Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.583283 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-llsph-config-56xlq"] Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.584495 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-llsph-config-56xlq" Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.587035 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.594889 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-llsph-config-56xlq"] Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.618692 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-zw8ls"] Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.743306 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-additional-scripts\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq" Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.743611 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-run\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq" Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.743642 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-7gfms\" (UniqueName: \"kubernetes.io/projected/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-kube-api-access-7gfms\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq" Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.743722 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-run-ovn\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq" Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.743817 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-log-ovn\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq" Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.743952 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-scripts\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq" Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.845969 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-scripts\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq" Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.846026 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-additional-scripts\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq" Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.846107 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-run\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq" Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.846147 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gfms\" (UniqueName: \"kubernetes.io/projected/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-kube-api-access-7gfms\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq" Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.846179 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-run-ovn\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq" Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.846480 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-log-ovn\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq" Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.846488 4942 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-run\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq" Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.846492 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-run-ovn\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq" Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.846552 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-log-ovn\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq" Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.847100 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-additional-scripts\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq" Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.847904 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-scripts\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq" Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.866103 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gfms\" 
(UniqueName: \"kubernetes.io/projected/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-kube-api-access-7gfms\") pod \"ovn-controller-llsph-config-56xlq\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " pod="openstack/ovn-controller-llsph-config-56xlq" Feb 18 19:34:06 crc kubenswrapper[4942]: I0218 19:34:06.904422 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-llsph-config-56xlq" Feb 18 19:34:07 crc kubenswrapper[4942]: I0218 19:34:07.171770 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-trjtn"] Feb 18 19:34:07 crc kubenswrapper[4942]: I0218 19:34:07.173073 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-trjtn" Feb 18 19:34:07 crc kubenswrapper[4942]: I0218 19:34:07.174654 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 18 19:34:07 crc kubenswrapper[4942]: I0218 19:34:07.186275 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-trjtn"] Feb 18 19:34:07 crc kubenswrapper[4942]: I0218 19:34:07.253023 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22b30cc6-6022-4a4f-9911-7a47df5f2c98-operator-scripts\") pod \"root-account-create-update-trjtn\" (UID: \"22b30cc6-6022-4a4f-9911-7a47df5f2c98\") " pod="openstack/root-account-create-update-trjtn" Feb 18 19:34:07 crc kubenswrapper[4942]: I0218 19:34:07.253284 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64zb7\" (UniqueName: \"kubernetes.io/projected/22b30cc6-6022-4a4f-9911-7a47df5f2c98-kube-api-access-64zb7\") pod \"root-account-create-update-trjtn\" (UID: \"22b30cc6-6022-4a4f-9911-7a47df5f2c98\") " pod="openstack/root-account-create-update-trjtn" Feb 18 19:34:07 crc 
kubenswrapper[4942]: I0218 19:34:07.355249 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22b30cc6-6022-4a4f-9911-7a47df5f2c98-operator-scripts\") pod \"root-account-create-update-trjtn\" (UID: \"22b30cc6-6022-4a4f-9911-7a47df5f2c98\") " pod="openstack/root-account-create-update-trjtn" Feb 18 19:34:07 crc kubenswrapper[4942]: I0218 19:34:07.355441 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64zb7\" (UniqueName: \"kubernetes.io/projected/22b30cc6-6022-4a4f-9911-7a47df5f2c98-kube-api-access-64zb7\") pod \"root-account-create-update-trjtn\" (UID: \"22b30cc6-6022-4a4f-9911-7a47df5f2c98\") " pod="openstack/root-account-create-update-trjtn" Feb 18 19:34:07 crc kubenswrapper[4942]: I0218 19:34:07.356216 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22b30cc6-6022-4a4f-9911-7a47df5f2c98-operator-scripts\") pod \"root-account-create-update-trjtn\" (UID: \"22b30cc6-6022-4a4f-9911-7a47df5f2c98\") " pod="openstack/root-account-create-update-trjtn" Feb 18 19:34:07 crc kubenswrapper[4942]: I0218 19:34:07.377952 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64zb7\" (UniqueName: \"kubernetes.io/projected/22b30cc6-6022-4a4f-9911-7a47df5f2c98-kube-api-access-64zb7\") pod \"root-account-create-update-trjtn\" (UID: \"22b30cc6-6022-4a4f-9911-7a47df5f2c98\") " pod="openstack/root-account-create-update-trjtn" Feb 18 19:34:07 crc kubenswrapper[4942]: I0218 19:34:07.495163 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-trjtn" Feb 18 19:34:07 crc kubenswrapper[4942]: I0218 19:34:07.504878 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zw8ls" event={"ID":"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3","Type":"ContainerStarted","Data":"e983b61464f792023c5c202bd16dd9437e3b945f9e2f82c09b596638a70e9520"} Feb 18 19:34:08 crc kubenswrapper[4942]: I0218 19:34:07.858454 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-llsph-config-56xlq"] Feb 18 19:34:08 crc kubenswrapper[4942]: W0218 19:34:07.868505 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cf16fd5_4915_49f5_b08b_d1bad49cd27a.slice/crio-7d8fd65744afa0050e57481e8dbc7b7a75d872cbbfb24132783ee4dc0a627056 WatchSource:0}: Error finding container 7d8fd65744afa0050e57481e8dbc7b7a75d872cbbfb24132783ee4dc0a627056: Status 404 returned error can't find the container with id 7d8fd65744afa0050e57481e8dbc7b7a75d872cbbfb24132783ee4dc0a627056 Feb 18 19:34:08 crc kubenswrapper[4942]: I0218 19:34:08.018376 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-trjtn"] Feb 18 19:34:08 crc kubenswrapper[4942]: I0218 19:34:08.520210 4942 generic.go:334] "Generic (PLEG): container finished" podID="22b30cc6-6022-4a4f-9911-7a47df5f2c98" containerID="f762c8a9d2890b0c6a5aa76b7b4d8dbd055509fafd584287df55f4c0629feaed" exitCode=0 Feb 18 19:34:08 crc kubenswrapper[4942]: I0218 19:34:08.520419 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-trjtn" event={"ID":"22b30cc6-6022-4a4f-9911-7a47df5f2c98","Type":"ContainerDied","Data":"f762c8a9d2890b0c6a5aa76b7b4d8dbd055509fafd584287df55f4c0629feaed"} Feb 18 19:34:08 crc kubenswrapper[4942]: I0218 19:34:08.520470 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-trjtn" 
event={"ID":"22b30cc6-6022-4a4f-9911-7a47df5f2c98","Type":"ContainerStarted","Data":"6eaccd964fe5040d0302d45e130d879648b3963ccef106d16cd8f783846424c0"} Feb 18 19:34:08 crc kubenswrapper[4942]: I0218 19:34:08.527188 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-llsph-config-56xlq" event={"ID":"4cf16fd5-4915-49f5-b08b-d1bad49cd27a","Type":"ContainerStarted","Data":"7d25a210ee23b71ffe8e6422d5c4b01d726dcdfde682e5219625754a6f1f5d53"} Feb 18 19:34:08 crc kubenswrapper[4942]: I0218 19:34:08.527221 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-llsph-config-56xlq" event={"ID":"4cf16fd5-4915-49f5-b08b-d1bad49cd27a","Type":"ContainerStarted","Data":"7d8fd65744afa0050e57481e8dbc7b7a75d872cbbfb24132783ee4dc0a627056"} Feb 18 19:34:08 crc kubenswrapper[4942]: I0218 19:34:08.562034 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-llsph-config-56xlq" podStartSLOduration=2.562013479 podStartE2EDuration="2.562013479s" podCreationTimestamp="2026-02-18 19:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:34:08.54997323 +0000 UTC m=+1008.254905895" watchObservedRunningTime="2026-02-18 19:34:08.562013479 +0000 UTC m=+1008.266946144" Feb 18 19:34:09 crc kubenswrapper[4942]: I0218 19:34:09.541109 4942 generic.go:334] "Generic (PLEG): container finished" podID="4cf16fd5-4915-49f5-b08b-d1bad49cd27a" containerID="7d25a210ee23b71ffe8e6422d5c4b01d726dcdfde682e5219625754a6f1f5d53" exitCode=0 Feb 18 19:34:09 crc kubenswrapper[4942]: I0218 19:34:09.541252 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-llsph-config-56xlq" event={"ID":"4cf16fd5-4915-49f5-b08b-d1bad49cd27a","Type":"ContainerDied","Data":"7d25a210ee23b71ffe8e6422d5c4b01d726dcdfde682e5219625754a6f1f5d53"} Feb 18 19:34:09 crc kubenswrapper[4942]: I0218 
19:34:09.927303 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-trjtn" Feb 18 19:34:10 crc kubenswrapper[4942]: I0218 19:34:10.007146 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22b30cc6-6022-4a4f-9911-7a47df5f2c98-operator-scripts\") pod \"22b30cc6-6022-4a4f-9911-7a47df5f2c98\" (UID: \"22b30cc6-6022-4a4f-9911-7a47df5f2c98\") " Feb 18 19:34:10 crc kubenswrapper[4942]: I0218 19:34:10.007396 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64zb7\" (UniqueName: \"kubernetes.io/projected/22b30cc6-6022-4a4f-9911-7a47df5f2c98-kube-api-access-64zb7\") pod \"22b30cc6-6022-4a4f-9911-7a47df5f2c98\" (UID: \"22b30cc6-6022-4a4f-9911-7a47df5f2c98\") " Feb 18 19:34:10 crc kubenswrapper[4942]: I0218 19:34:10.008013 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22b30cc6-6022-4a4f-9911-7a47df5f2c98-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22b30cc6-6022-4a4f-9911-7a47df5f2c98" (UID: "22b30cc6-6022-4a4f-9911-7a47df5f2c98"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:10 crc kubenswrapper[4942]: I0218 19:34:10.019578 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22b30cc6-6022-4a4f-9911-7a47df5f2c98-kube-api-access-64zb7" (OuterVolumeSpecName: "kube-api-access-64zb7") pod "22b30cc6-6022-4a4f-9911-7a47df5f2c98" (UID: "22b30cc6-6022-4a4f-9911-7a47df5f2c98"). InnerVolumeSpecName "kube-api-access-64zb7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:10 crc kubenswrapper[4942]: I0218 19:34:10.109493 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22b30cc6-6022-4a4f-9911-7a47df5f2c98-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:10 crc kubenswrapper[4942]: I0218 19:34:10.109531 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64zb7\" (UniqueName: \"kubernetes.io/projected/22b30cc6-6022-4a4f-9911-7a47df5f2c98-kube-api-access-64zb7\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:10 crc kubenswrapper[4942]: I0218 19:34:10.551834 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-trjtn" Feb 18 19:34:10 crc kubenswrapper[4942]: I0218 19:34:10.553069 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-trjtn" event={"ID":"22b30cc6-6022-4a4f-9911-7a47df5f2c98","Type":"ContainerDied","Data":"6eaccd964fe5040d0302d45e130d879648b3963ccef106d16cd8f783846424c0"} Feb 18 19:34:10 crc kubenswrapper[4942]: I0218 19:34:10.554013 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6eaccd964fe5040d0302d45e130d879648b3963ccef106d16cd8f783846424c0" Feb 18 19:34:10 crc kubenswrapper[4942]: I0218 19:34:10.924447 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-llsph-config-56xlq" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.024564 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gfms\" (UniqueName: \"kubernetes.io/projected/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-kube-api-access-7gfms\") pod \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.024616 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-log-ovn\") pod \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.024645 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-scripts\") pod \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.024663 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4cf16fd5-4915-49f5-b08b-d1bad49cd27a" (UID: "4cf16fd5-4915-49f5-b08b-d1bad49cd27a"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.025646 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-scripts" (OuterVolumeSpecName: "scripts") pod "4cf16fd5-4915-49f5-b08b-d1bad49cd27a" (UID: "4cf16fd5-4915-49f5-b08b-d1bad49cd27a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.026099 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-run\") pod \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.026286 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-additional-scripts\") pod \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.026332 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-run-ovn\") pod \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\" (UID: \"4cf16fd5-4915-49f5-b08b-d1bad49cd27a\") " Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.026283 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-run" (OuterVolumeSpecName: "var-run") pod "4cf16fd5-4915-49f5-b08b-d1bad49cd27a" (UID: "4cf16fd5-4915-49f5-b08b-d1bad49cd27a"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.026447 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "4cf16fd5-4915-49f5-b08b-d1bad49cd27a" (UID: "4cf16fd5-4915-49f5-b08b-d1bad49cd27a"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.026526 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "4cf16fd5-4915-49f5-b08b-d1bad49cd27a" (UID: "4cf16fd5-4915-49f5-b08b-d1bad49cd27a"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.027023 4942 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.027042 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.027052 4942 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.027062 4942 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.027072 4942 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.042519 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-kube-api-access-7gfms" (OuterVolumeSpecName: "kube-api-access-7gfms") pod "4cf16fd5-4915-49f5-b08b-d1bad49cd27a" (UID: "4cf16fd5-4915-49f5-b08b-d1bad49cd27a"). InnerVolumeSpecName "kube-api-access-7gfms". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.097904 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-llsph" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.128834 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gfms\" (UniqueName: \"kubernetes.io/projected/4cf16fd5-4915-49f5-b08b-d1bad49cd27a-kube-api-access-7gfms\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.560049 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-llsph-config-56xlq" event={"ID":"4cf16fd5-4915-49f5-b08b-d1bad49cd27a","Type":"ContainerDied","Data":"7d8fd65744afa0050e57481e8dbc7b7a75d872cbbfb24132783ee4dc0a627056"} Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.560088 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d8fd65744afa0050e57481e8dbc7b7a75d872cbbfb24132783ee4dc0a627056" Feb 18 19:34:11 crc kubenswrapper[4942]: I0218 19:34:11.560134 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-llsph-config-56xlq" Feb 18 19:34:12 crc kubenswrapper[4942]: I0218 19:34:12.028793 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-llsph-config-56xlq"] Feb 18 19:34:12 crc kubenswrapper[4942]: I0218 19:34:12.035342 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-llsph-config-56xlq"] Feb 18 19:34:13 crc kubenswrapper[4942]: I0218 19:34:13.049455 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cf16fd5-4915-49f5-b08b-d1bad49cd27a" path="/var/lib/kubelet/pods/4cf16fd5-4915-49f5-b08b-d1bad49cd27a/volumes" Feb 18 19:34:13 crc kubenswrapper[4942]: I0218 19:34:13.901843 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-trjtn"] Feb 18 19:34:13 crc kubenswrapper[4942]: I0218 19:34:13.908285 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-trjtn"] Feb 18 19:34:13 crc kubenswrapper[4942]: I0218 19:34:13.977656 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:13 crc kubenswrapper[4942]: I0218 19:34:13.980269 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:14 crc kubenswrapper[4942]: I0218 19:34:14.585877 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:15 crc kubenswrapper[4942]: I0218 19:34:15.053728 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22b30cc6-6022-4a4f-9911-7a47df5f2c98" path="/var/lib/kubelet/pods/22b30cc6-6022-4a4f-9911-7a47df5f2c98/volumes" Feb 18 19:34:15 crc kubenswrapper[4942]: I0218 19:34:15.606334 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:34:15 crc kubenswrapper[4942]: I0218 19:34:15.621188 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/125bdbb5-76a8-450f-b645-2133024a1bd0-etc-swift\") pod \"swift-storage-0\" (UID: \"125bdbb5-76a8-450f-b645-2133024a1bd0\") " pod="openstack/swift-storage-0" Feb 18 19:34:15 crc kubenswrapper[4942]: I0218 19:34:15.874146 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 18 19:34:16 crc kubenswrapper[4942]: I0218 19:34:16.820237 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:34:16 crc kubenswrapper[4942]: I0218 19:34:16.820782 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="prometheus" containerID="cri-o://19ca73d07d23c2f4be951d7909e61b79e21cfc7d91c0a9ffd938eb9ea1e5646a" gracePeriod=600 Feb 18 19:34:16 crc kubenswrapper[4942]: I0218 19:34:16.820867 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="thanos-sidecar" containerID="cri-o://fd3aef2dcd467a4e4443cb718f2ad37e73afe0c2cc787eca566999184738b19b" gracePeriod=600 Feb 18 19:34:16 crc kubenswrapper[4942]: I0218 19:34:16.820890 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="config-reloader" containerID="cri-o://ebae20c9222b3aee15451c1f0bbaa8cd79204c32bb3e86cff12a92b878e9497f" gracePeriod=600 Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 
19:34:17.034010 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.327916 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.392668 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-4zlhp"] Feb 18 19:34:17 crc kubenswrapper[4942]: E0218 19:34:17.393011 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22b30cc6-6022-4a4f-9911-7a47df5f2c98" containerName="mariadb-account-create-update" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.393028 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="22b30cc6-6022-4a4f-9911-7a47df5f2c98" containerName="mariadb-account-create-update" Feb 18 19:34:17 crc kubenswrapper[4942]: E0218 19:34:17.393053 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf16fd5-4915-49f5-b08b-d1bad49cd27a" containerName="ovn-config" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.393060 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf16fd5-4915-49f5-b08b-d1bad49cd27a" containerName="ovn-config" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.393209 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cf16fd5-4915-49f5-b08b-d1bad49cd27a" containerName="ovn-config" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.393235 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="22b30cc6-6022-4a4f-9911-7a47df5f2c98" containerName="mariadb-account-create-update" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.393807 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-4zlhp" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.413608 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4zlhp"] Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.455534 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-4h9n5"] Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.456799 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.459205 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-jp82k" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.459287 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.479348 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-4h9n5"] Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.549286 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a8e424f-44a5-4eaa-9f3f-882f070aa404-operator-scripts\") pod \"cinder-db-create-4zlhp\" (UID: \"9a8e424f-44a5-4eaa-9f3f-882f070aa404\") " pod="openstack/cinder-db-create-4zlhp" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.549621 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5lhw\" (UniqueName: \"kubernetes.io/projected/9a8e424f-44a5-4eaa-9f3f-882f070aa404-kube-api-access-q5lhw\") pod \"cinder-db-create-4zlhp\" (UID: \"9a8e424f-44a5-4eaa-9f3f-882f070aa404\") " pod="openstack/cinder-db-create-4zlhp" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.577839 4942 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cinder-e916-account-create-update-lm2r5"] Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.579470 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e916-account-create-update-lm2r5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.583683 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.592202 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e916-account-create-update-lm2r5"] Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.613945 4942 generic.go:334] "Generic (PLEG): container finished" podID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerID="fd3aef2dcd467a4e4443cb718f2ad37e73afe0c2cc787eca566999184738b19b" exitCode=0 Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.613967 4942 generic.go:334] "Generic (PLEG): container finished" podID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerID="ebae20c9222b3aee15451c1f0bbaa8cd79204c32bb3e86cff12a92b878e9497f" exitCode=0 Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.613974 4942 generic.go:334] "Generic (PLEG): container finished" podID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerID="19ca73d07d23c2f4be951d7909e61b79e21cfc7d91c0a9ffd938eb9ea1e5646a" exitCode=0 Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.613993 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"543db3d4-08d8-473f-a6ad-7e6a5bb9734c","Type":"ContainerDied","Data":"fd3aef2dcd467a4e4443cb718f2ad37e73afe0c2cc787eca566999184738b19b"} Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.614021 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"543db3d4-08d8-473f-a6ad-7e6a5bb9734c","Type":"ContainerDied","Data":"ebae20c9222b3aee15451c1f0bbaa8cd79204c32bb3e86cff12a92b878e9497f"} Feb 18 19:34:17 crc 
kubenswrapper[4942]: I0218 19:34:17.614032 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"543db3d4-08d8-473f-a6ad-7e6a5bb9734c","Type":"ContainerDied","Data":"19ca73d07d23c2f4be951d7909e61b79e21cfc7d91c0a9ffd938eb9ea1e5646a"} Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.651562 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-db-sync-config-data\") pod \"watcher-db-sync-4h9n5\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.651615 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5lhw\" (UniqueName: \"kubernetes.io/projected/9a8e424f-44a5-4eaa-9f3f-882f070aa404-kube-api-access-q5lhw\") pod \"cinder-db-create-4zlhp\" (UID: \"9a8e424f-44a5-4eaa-9f3f-882f070aa404\") " pod="openstack/cinder-db-create-4zlhp" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.651663 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-config-data\") pod \"watcher-db-sync-4h9n5\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.651712 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfbhf\" (UniqueName: \"kubernetes.io/projected/983d5293-8413-4a29-88b2-ba775b3b4a8b-kube-api-access-mfbhf\") pod \"watcher-db-sync-4h9n5\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.651747 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a8e424f-44a5-4eaa-9f3f-882f070aa404-operator-scripts\") pod \"cinder-db-create-4zlhp\" (UID: \"9a8e424f-44a5-4eaa-9f3f-882f070aa404\") " pod="openstack/cinder-db-create-4zlhp" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.651808 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-combined-ca-bundle\") pod \"watcher-db-sync-4h9n5\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.652703 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a8e424f-44a5-4eaa-9f3f-882f070aa404-operator-scripts\") pod \"cinder-db-create-4zlhp\" (UID: \"9a8e424f-44a5-4eaa-9f3f-882f070aa404\") " pod="openstack/cinder-db-create-4zlhp" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.667378 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-njfd6"] Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.668785 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-njfd6" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.700478 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-njfd6"] Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.705957 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5lhw\" (UniqueName: \"kubernetes.io/projected/9a8e424f-44a5-4eaa-9f3f-882f070aa404-kube-api-access-q5lhw\") pod \"cinder-db-create-4zlhp\" (UID: \"9a8e424f-44a5-4eaa-9f3f-882f070aa404\") " pod="openstack/cinder-db-create-4zlhp" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.710055 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4zlhp" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.753709 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-db-sync-config-data\") pod \"watcher-db-sync-4h9n5\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.754176 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-config-data\") pod \"watcher-db-sync-4h9n5\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.754301 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcea68e2-0d37-4812-a7ad-403e59b7b556-operator-scripts\") pod \"cinder-e916-account-create-update-lm2r5\" (UID: \"fcea68e2-0d37-4812-a7ad-403e59b7b556\") " pod="openstack/cinder-e916-account-create-update-lm2r5" Feb 18 19:34:17 crc 
kubenswrapper[4942]: I0218 19:34:17.754493 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfbhf\" (UniqueName: \"kubernetes.io/projected/983d5293-8413-4a29-88b2-ba775b3b4a8b-kube-api-access-mfbhf\") pod \"watcher-db-sync-4h9n5\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.754683 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-combined-ca-bundle\") pod \"watcher-db-sync-4h9n5\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.754950 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmdm8\" (UniqueName: \"kubernetes.io/projected/fcea68e2-0d37-4812-a7ad-403e59b7b556-kube-api-access-kmdm8\") pod \"cinder-e916-account-create-update-lm2r5\" (UID: \"fcea68e2-0d37-4812-a7ad-403e59b7b556\") " pod="openstack/cinder-e916-account-create-update-lm2r5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.768229 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-db-sync-config-data\") pod \"watcher-db-sync-4h9n5\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.771715 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-config-data\") pod \"watcher-db-sync-4h9n5\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.771810 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-combined-ca-bundle\") pod \"watcher-db-sync-4h9n5\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.785997 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfbhf\" (UniqueName: \"kubernetes.io/projected/983d5293-8413-4a29-88b2-ba775b3b4a8b-kube-api-access-mfbhf\") pod \"watcher-db-sync-4h9n5\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.796260 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-fee6-account-create-update-jhlbn"] Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.797267 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fee6-account-create-update-jhlbn" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.799051 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.807843 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fee6-account-create-update-jhlbn"] Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.856540 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcea68e2-0d37-4812-a7ad-403e59b7b556-operator-scripts\") pod \"cinder-e916-account-create-update-lm2r5\" (UID: \"fcea68e2-0d37-4812-a7ad-403e59b7b556\") " pod="openstack/cinder-e916-account-create-update-lm2r5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.857665 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvwzc\" 
(UniqueName: \"kubernetes.io/projected/dddbc305-d881-4ef9-ada1-49e8f180162c-kube-api-access-zvwzc\") pod \"barbican-db-create-njfd6\" (UID: \"dddbc305-d881-4ef9-ada1-49e8f180162c\") " pod="openstack/barbican-db-create-njfd6" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.857678 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcea68e2-0d37-4812-a7ad-403e59b7b556-operator-scripts\") pod \"cinder-e916-account-create-update-lm2r5\" (UID: \"fcea68e2-0d37-4812-a7ad-403e59b7b556\") " pod="openstack/cinder-e916-account-create-update-lm2r5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.857963 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dddbc305-d881-4ef9-ada1-49e8f180162c-operator-scripts\") pod \"barbican-db-create-njfd6\" (UID: \"dddbc305-d881-4ef9-ada1-49e8f180162c\") " pod="openstack/barbican-db-create-njfd6" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.858093 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmdm8\" (UniqueName: \"kubernetes.io/projected/fcea68e2-0d37-4812-a7ad-403e59b7b556-kube-api-access-kmdm8\") pod \"cinder-e916-account-create-update-lm2r5\" (UID: \"fcea68e2-0d37-4812-a7ad-403e59b7b556\") " pod="openstack/cinder-e916-account-create-update-lm2r5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.876698 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-87p82"] Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.877964 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-87p82" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.880378 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.881928 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9szpl" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.882359 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.883452 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.888221 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-87p82"] Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.901241 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-f862-account-create-update-29qlq"] Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.901662 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmdm8\" (UniqueName: \"kubernetes.io/projected/fcea68e2-0d37-4812-a7ad-403e59b7b556-kube-api-access-kmdm8\") pod \"cinder-e916-account-create-update-lm2r5\" (UID: \"fcea68e2-0d37-4812-a7ad-403e59b7b556\") " pod="openstack/cinder-e916-account-create-update-lm2r5" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.913173 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f862-account-create-update-29qlq" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.920141 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f862-account-create-update-29qlq"] Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.920221 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.959398 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvwzc\" (UniqueName: \"kubernetes.io/projected/dddbc305-d881-4ef9-ada1-49e8f180162c-kube-api-access-zvwzc\") pod \"barbican-db-create-njfd6\" (UID: \"dddbc305-d881-4ef9-ada1-49e8f180162c\") " pod="openstack/barbican-db-create-njfd6" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.959464 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nshl4\" (UniqueName: \"kubernetes.io/projected/c903d652-2880-43bd-9445-f1b03764f413-kube-api-access-nshl4\") pod \"barbican-fee6-account-create-update-jhlbn\" (UID: \"c903d652-2880-43bd-9445-f1b03764f413\") " pod="openstack/barbican-fee6-account-create-update-jhlbn" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.959517 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dddbc305-d881-4ef9-ada1-49e8f180162c-operator-scripts\") pod \"barbican-db-create-njfd6\" (UID: \"dddbc305-d881-4ef9-ada1-49e8f180162c\") " pod="openstack/barbican-db-create-njfd6" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.959548 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c903d652-2880-43bd-9445-f1b03764f413-operator-scripts\") pod \"barbican-fee6-account-create-update-jhlbn\" (UID: 
\"c903d652-2880-43bd-9445-f1b03764f413\") " pod="openstack/barbican-fee6-account-create-update-jhlbn" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.961550 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dddbc305-d881-4ef9-ada1-49e8f180162c-operator-scripts\") pod \"barbican-db-create-njfd6\" (UID: \"dddbc305-d881-4ef9-ada1-49e8f180162c\") " pod="openstack/barbican-db-create-njfd6" Feb 18 19:34:17 crc kubenswrapper[4942]: I0218 19:34:17.985929 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvwzc\" (UniqueName: \"kubernetes.io/projected/dddbc305-d881-4ef9-ada1-49e8f180162c-kube-api-access-zvwzc\") pod \"barbican-db-create-njfd6\" (UID: \"dddbc305-d881-4ef9-ada1-49e8f180162c\") " pod="openstack/barbican-db-create-njfd6" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.022988 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-s54gq"] Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.024235 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-s54gq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.033536 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-njfd6" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.044413 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-s54gq"] Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.062814 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdmfn\" (UniqueName: \"kubernetes.io/projected/35dbdf24-b5f9-4a19-96f9-1fe390df90e1-kube-api-access-zdmfn\") pod \"neutron-f862-account-create-update-29qlq\" (UID: \"35dbdf24-b5f9-4a19-96f9-1fe390df90e1\") " pod="openstack/neutron-f862-account-create-update-29qlq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.062881 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nshl4\" (UniqueName: \"kubernetes.io/projected/c903d652-2880-43bd-9445-f1b03764f413-kube-api-access-nshl4\") pod \"barbican-fee6-account-create-update-jhlbn\" (UID: \"c903d652-2880-43bd-9445-f1b03764f413\") " pod="openstack/barbican-fee6-account-create-update-jhlbn" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.062906 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-config-data\") pod \"keystone-db-sync-87p82\" (UID: \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\") " pod="openstack/keystone-db-sync-87p82" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.062963 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-combined-ca-bundle\") pod \"keystone-db-sync-87p82\" (UID: \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\") " pod="openstack/keystone-db-sync-87p82" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.062997 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c903d652-2880-43bd-9445-f1b03764f413-operator-scripts\") pod \"barbican-fee6-account-create-update-jhlbn\" (UID: \"c903d652-2880-43bd-9445-f1b03764f413\") " pod="openstack/barbican-fee6-account-create-update-jhlbn" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.063031 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4b9x\" (UniqueName: \"kubernetes.io/projected/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-kube-api-access-f4b9x\") pod \"keystone-db-sync-87p82\" (UID: \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\") " pod="openstack/keystone-db-sync-87p82" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.063069 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35dbdf24-b5f9-4a19-96f9-1fe390df90e1-operator-scripts\") pod \"neutron-f862-account-create-update-29qlq\" (UID: \"35dbdf24-b5f9-4a19-96f9-1fe390df90e1\") " pod="openstack/neutron-f862-account-create-update-29qlq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.064315 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c903d652-2880-43bd-9445-f1b03764f413-operator-scripts\") pod \"barbican-fee6-account-create-update-jhlbn\" (UID: \"c903d652-2880-43bd-9445-f1b03764f413\") " pod="openstack/barbican-fee6-account-create-update-jhlbn" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.076593 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.109279 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nshl4\" (UniqueName: \"kubernetes.io/projected/c903d652-2880-43bd-9445-f1b03764f413-kube-api-access-nshl4\") pod \"barbican-fee6-account-create-update-jhlbn\" (UID: \"c903d652-2880-43bd-9445-f1b03764f413\") " pod="openstack/barbican-fee6-account-create-update-jhlbn" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.140174 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fee6-account-create-update-jhlbn" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.164345 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35dbdf24-b5f9-4a19-96f9-1fe390df90e1-operator-scripts\") pod \"neutron-f862-account-create-update-29qlq\" (UID: \"35dbdf24-b5f9-4a19-96f9-1fe390df90e1\") " pod="openstack/neutron-f862-account-create-update-29qlq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.164466 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdmfn\" (UniqueName: \"kubernetes.io/projected/35dbdf24-b5f9-4a19-96f9-1fe390df90e1-kube-api-access-zdmfn\") pod \"neutron-f862-account-create-update-29qlq\" (UID: \"35dbdf24-b5f9-4a19-96f9-1fe390df90e1\") " pod="openstack/neutron-f862-account-create-update-29qlq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.164491 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-config-data\") pod \"keystone-db-sync-87p82\" (UID: \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\") " pod="openstack/keystone-db-sync-87p82" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.164542 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-combined-ca-bundle\") pod \"keystone-db-sync-87p82\" (UID: \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\") " pod="openstack/keystone-db-sync-87p82" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.164574 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fl2t\" (UniqueName: \"kubernetes.io/projected/fd491cd9-f58f-4821-8004-a5a4762d6bdb-kube-api-access-5fl2t\") pod \"neutron-db-create-s54gq\" (UID: \"fd491cd9-f58f-4821-8004-a5a4762d6bdb\") " pod="openstack/neutron-db-create-s54gq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.164627 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd491cd9-f58f-4821-8004-a5a4762d6bdb-operator-scripts\") pod \"neutron-db-create-s54gq\" (UID: \"fd491cd9-f58f-4821-8004-a5a4762d6bdb\") " pod="openstack/neutron-db-create-s54gq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.164661 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4b9x\" (UniqueName: \"kubernetes.io/projected/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-kube-api-access-f4b9x\") pod \"keystone-db-sync-87p82\" (UID: \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\") " pod="openstack/keystone-db-sync-87p82" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.166942 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35dbdf24-b5f9-4a19-96f9-1fe390df90e1-operator-scripts\") pod \"neutron-f862-account-create-update-29qlq\" (UID: \"35dbdf24-b5f9-4a19-96f9-1fe390df90e1\") " pod="openstack/neutron-f862-account-create-update-29qlq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.168577 4942 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-config-data\") pod \"keystone-db-sync-87p82\" (UID: \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\") " pod="openstack/keystone-db-sync-87p82" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.170524 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-combined-ca-bundle\") pod \"keystone-db-sync-87p82\" (UID: \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\") " pod="openstack/keystone-db-sync-87p82" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.186261 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4b9x\" (UniqueName: \"kubernetes.io/projected/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-kube-api-access-f4b9x\") pod \"keystone-db-sync-87p82\" (UID: \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\") " pod="openstack/keystone-db-sync-87p82" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.193276 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-87p82" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.197360 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e916-account-create-update-lm2r5" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.207310 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdmfn\" (UniqueName: \"kubernetes.io/projected/35dbdf24-b5f9-4a19-96f9-1fe390df90e1-kube-api-access-zdmfn\") pod \"neutron-f862-account-create-update-29qlq\" (UID: \"35dbdf24-b5f9-4a19-96f9-1fe390df90e1\") " pod="openstack/neutron-f862-account-create-update-29qlq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.248703 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f862-account-create-update-29qlq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.266074 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd491cd9-f58f-4821-8004-a5a4762d6bdb-operator-scripts\") pod \"neutron-db-create-s54gq\" (UID: \"fd491cd9-f58f-4821-8004-a5a4762d6bdb\") " pod="openstack/neutron-db-create-s54gq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.266224 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fl2t\" (UniqueName: \"kubernetes.io/projected/fd491cd9-f58f-4821-8004-a5a4762d6bdb-kube-api-access-5fl2t\") pod \"neutron-db-create-s54gq\" (UID: \"fd491cd9-f58f-4821-8004-a5a4762d6bdb\") " pod="openstack/neutron-db-create-s54gq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.267038 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd491cd9-f58f-4821-8004-a5a4762d6bdb-operator-scripts\") pod \"neutron-db-create-s54gq\" (UID: \"fd491cd9-f58f-4821-8004-a5a4762d6bdb\") " pod="openstack/neutron-db-create-s54gq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.288349 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fl2t\" (UniqueName: \"kubernetes.io/projected/fd491cd9-f58f-4821-8004-a5a4762d6bdb-kube-api-access-5fl2t\") pod \"neutron-db-create-s54gq\" (UID: \"fd491cd9-f58f-4821-8004-a5a4762d6bdb\") " pod="openstack/neutron-db-create-s54gq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.396796 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-s54gq" Feb 18 19:34:18 crc kubenswrapper[4942]: I0218 19:34:18.979409 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.111:9090/-/ready\": dial tcp 10.217.0.111:9090: connect: connection refused" Feb 18 19:34:19 crc kubenswrapper[4942]: I0218 19:34:19.011072 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-8f782"] Feb 18 19:34:19 crc kubenswrapper[4942]: I0218 19:34:19.012182 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8f782" Feb 18 19:34:19 crc kubenswrapper[4942]: I0218 19:34:19.014629 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 18 19:34:19 crc kubenswrapper[4942]: I0218 19:34:19.028123 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8f782"] Feb 18 19:34:19 crc kubenswrapper[4942]: I0218 19:34:19.080382 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq5mf\" (UniqueName: \"kubernetes.io/projected/4edc6296-1ba6-43f7-a076-93f94c77a2c9-kube-api-access-cq5mf\") pod \"root-account-create-update-8f782\" (UID: \"4edc6296-1ba6-43f7-a076-93f94c77a2c9\") " pod="openstack/root-account-create-update-8f782" Feb 18 19:34:19 crc kubenswrapper[4942]: I0218 19:34:19.080850 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4edc6296-1ba6-43f7-a076-93f94c77a2c9-operator-scripts\") pod \"root-account-create-update-8f782\" (UID: \"4edc6296-1ba6-43f7-a076-93f94c77a2c9\") " pod="openstack/root-account-create-update-8f782" Feb 18 
19:34:19 crc kubenswrapper[4942]: I0218 19:34:19.182273 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq5mf\" (UniqueName: \"kubernetes.io/projected/4edc6296-1ba6-43f7-a076-93f94c77a2c9-kube-api-access-cq5mf\") pod \"root-account-create-update-8f782\" (UID: \"4edc6296-1ba6-43f7-a076-93f94c77a2c9\") " pod="openstack/root-account-create-update-8f782" Feb 18 19:34:19 crc kubenswrapper[4942]: I0218 19:34:19.182476 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4edc6296-1ba6-43f7-a076-93f94c77a2c9-operator-scripts\") pod \"root-account-create-update-8f782\" (UID: \"4edc6296-1ba6-43f7-a076-93f94c77a2c9\") " pod="openstack/root-account-create-update-8f782" Feb 18 19:34:19 crc kubenswrapper[4942]: I0218 19:34:19.185794 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4edc6296-1ba6-43f7-a076-93f94c77a2c9-operator-scripts\") pod \"root-account-create-update-8f782\" (UID: \"4edc6296-1ba6-43f7-a076-93f94c77a2c9\") " pod="openstack/root-account-create-update-8f782" Feb 18 19:34:19 crc kubenswrapper[4942]: I0218 19:34:19.200899 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq5mf\" (UniqueName: \"kubernetes.io/projected/4edc6296-1ba6-43f7-a076-93f94c77a2c9-kube-api-access-cq5mf\") pod \"root-account-create-update-8f782\" (UID: \"4edc6296-1ba6-43f7-a076-93f94c77a2c9\") " pod="openstack/root-account-create-update-8f782" Feb 18 19:34:19 crc kubenswrapper[4942]: I0218 19:34:19.336580 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-8f782" Feb 18 19:34:23 crc kubenswrapper[4942]: E0218 19:34:23.047708 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 18 19:34:23 crc kubenswrapper[4942]: E0218 19:34:23.047907 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gb9h8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privil
eged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-zw8ls_openstack(72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:34:23 crc kubenswrapper[4942]: E0218 19:34:23.049624 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-zw8ls" podUID="72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.571886 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.665246 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f862-account-create-update-29qlq"] Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.674119 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"543db3d4-08d8-473f-a6ad-7e6a5bb9734c","Type":"ContainerDied","Data":"1193c3f2b445b73f045913a6f677cad12654f417ef42c816b25977d36d83acd7"} Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.674186 4942 scope.go:117] "RemoveContainer" containerID="fd3aef2dcd467a4e4443cb718f2ad37e73afe0c2cc787eca566999184738b19b" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.674210 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:23 crc kubenswrapper[4942]: E0218 19:34:23.675080 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-zw8ls" podUID="72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3" Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.678225 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-njfd6"] Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.681326 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-thanos-prometheus-http-client-file\") pod \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.682447 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-config\") pod \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.682522 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-2\") pod \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") " Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.682556 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-config-out\") pod 
\"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") "
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.682600 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-web-config\") pod \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") "
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.682651 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pvjr\" (UniqueName: \"kubernetes.io/projected/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-kube-api-access-6pvjr\") pod \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") "
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.682708 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-tls-assets\") pod \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") "
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.682745 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-1\") pod \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") "
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.683116 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") "
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.683170 4942
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-0\") pod \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\" (UID: \"543db3d4-08d8-473f-a6ad-7e6a5bb9734c\") "
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.683942 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "543db3d4-08d8-473f-a6ad-7e6a5bb9734c" (UID: "543db3d4-08d8-473f-a6ad-7e6a5bb9734c"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.684420 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "543db3d4-08d8-473f-a6ad-7e6a5bb9734c" (UID: "543db3d4-08d8-473f-a6ad-7e6a5bb9734c"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.685159 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "543db3d4-08d8-473f-a6ad-7e6a5bb9734c" (UID: "543db3d4-08d8-473f-a6ad-7e6a5bb9734c"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.687871 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-kube-api-access-6pvjr" (OuterVolumeSpecName: "kube-api-access-6pvjr") pod "543db3d4-08d8-473f-a6ad-7e6a5bb9734c" (UID: "543db3d4-08d8-473f-a6ad-7e6a5bb9734c"). InnerVolumeSpecName "kube-api-access-6pvjr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.688300 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e916-account-create-update-lm2r5"]
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.691804 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-config" (OuterVolumeSpecName: "config") pod "543db3d4-08d8-473f-a6ad-7e6a5bb9734c" (UID: "543db3d4-08d8-473f-a6ad-7e6a5bb9734c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.703305 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-config-out" (OuterVolumeSpecName: "config-out") pod "543db3d4-08d8-473f-a6ad-7e6a5bb9734c" (UID: "543db3d4-08d8-473f-a6ad-7e6a5bb9734c"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.703488 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "543db3d4-08d8-473f-a6ad-7e6a5bb9734c" (UID: "543db3d4-08d8-473f-a6ad-7e6a5bb9734c"). InnerVolumeSpecName "thanos-prometheus-http-client-file".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.703981 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "543db3d4-08d8-473f-a6ad-7e6a5bb9734c" (UID: "543db3d4-08d8-473f-a6ad-7e6a5bb9734c"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.705736 4942 scope.go:117] "RemoveContainer" containerID="ebae20c9222b3aee15451c1f0bbaa8cd79204c32bb3e86cff12a92b878e9497f"
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.708109 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.711916 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.726435 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-web-config" (OuterVolumeSpecName: "web-config") pod "543db3d4-08d8-473f-a6ad-7e6a5bb9734c" (UID: "543db3d4-08d8-473f-a6ad-7e6a5bb9734c"). InnerVolumeSpecName "web-config".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.741218 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.741269 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.746189 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "543db3d4-08d8-473f-a6ad-7e6a5bb9734c" (UID: "543db3d4-08d8-473f-a6ad-7e6a5bb9734c"). InnerVolumeSpecName "pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5".
PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.762422 4942 scope.go:117] "RemoveContainer" containerID="19ca73d07d23c2f4be951d7909e61b79e21cfc7d91c0a9ffd938eb9ea1e5646a"
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.786393 4942 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.786420 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.786430 4942 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.786453 4942 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-config-out\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.786465 4942 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-web-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.786475 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pvjr\" (UniqueName: \"kubernetes.io/projected/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-kube-api-access-6pvjr\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.786483 4942 reconciler_common.go:293] "Volume
detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-tls-assets\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.786493 4942 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.786515 4942 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") on node \"crc\" "
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.786526 4942 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/543db3d4-08d8-473f-a6ad-7e6a5bb9734c-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.820405 4942 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.820523 4942 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5") on node "crc"
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.829088 4942 scope.go:117] "RemoveContainer" containerID="81a3193c7a82e4ed4f2a5322d29f8d82024b97bad905eacfd10f035fcf65ddf4"
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.882945 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-s54gq"]
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.887692 4942 reconciler_common.go:293] "Volume detached for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.913754 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fee6-account-create-update-jhlbn"]
Feb 18 19:34:23 crc kubenswrapper[4942]: W0218 19:34:23.919075 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd491cd9_f58f_4821_8004_a5a4762d6bdb.slice/crio-15e5af09725a8d061b0fd0aa1bf3763e9837770b442d86d03cf057837a61bec1 WatchSource:0}: Error finding container 15e5af09725a8d061b0fd0aa1bf3763e9837770b442d86d03cf057837a61bec1: Status 404 returned error can't find the container with id 15e5af09725a8d061b0fd0aa1bf3763e9837770b442d86d03cf057837a61bec1
Feb 18 19:34:23 crc kubenswrapper[4942]: I0218 19:34:23.936873 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4zlhp"]
Feb 18 19:34:23 crc kubenswrapper[4942]: W0218 19:34:23.963175 4942 manager.go:1169] Failed to process watch event {EventType:0
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc903d652_2880_43bd_9445_f1b03764f413.slice/crio-eeca83cb3b414d5fe71656c4ea46b51dc52f248844cf4487dafc4102ebb78727 WatchSource:0}: Error finding container eeca83cb3b414d5fe71656c4ea46b51dc52f248844cf4487dafc4102ebb78727: Status 404 returned error can't find the container with id eeca83cb3b414d5fe71656c4ea46b51dc52f248844cf4487dafc4102ebb78727
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.026092 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.098936 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-87p82"]
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.163268 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8f782"]
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.202229 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.205727 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-4h9n5"]
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.261923 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.275678 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.316439 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 18 19:34:24 crc kubenswrapper[4942]: E0218 19:34:24.317036 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="prometheus"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.317048 4942
state_mem.go:107] "Deleted CPUSet assignment" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="prometheus"
Feb 18 19:34:24 crc kubenswrapper[4942]: E0218 19:34:24.317125 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="init-config-reloader"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.317132 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="init-config-reloader"
Feb 18 19:34:24 crc kubenswrapper[4942]: E0218 19:34:24.317143 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="config-reloader"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.317148 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="config-reloader"
Feb 18 19:34:24 crc kubenswrapper[4942]: E0218 19:34:24.317157 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="thanos-sidecar"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.317163 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="thanos-sidecar"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.317328 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="prometheus"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.317340 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="config-reloader"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.317350 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" containerName="thanos-sidecar"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.318862 4942
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.322552 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.322901 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.328162 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7f4m2"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.328246 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.328307 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.328500 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.328516 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.328640 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.338293 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.356639 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.400300
4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.404653 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.404927 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/219b2aa4-0497-40f8-a3d0-947d37be720d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.404956 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.404977 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.404997 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s685z\"
(UniqueName: \"kubernetes.io/projected/219b2aa4-0497-40f8-a3d0-947d37be720d-kube-api-access-s685z\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.405033 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.405081 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.405102 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.405125 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/219b2aa4-0497-40f8-a3d0-947d37be720d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") "
pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.405154 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.405173 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-config\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.405206 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.405262 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.507010 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName:
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.507067 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.507095 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/219b2aa4-0497-40f8-a3d0-947d37be720d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.507115 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.507136 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.507154 4942 reconciler_common.go:218] "operationExecutor.MountVolume
started for volume \"kube-api-access-s685z\" (UniqueName: \"kubernetes.io/projected/219b2aa4-0497-40f8-a3d0-947d37be720d-kube-api-access-s685z\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.507188 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.507241 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.507279 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.507316 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/219b2aa4-0497-40f8-a3d0-947d37be720d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") "
pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.507342 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.507361 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-config\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.507378 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.508124 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.509423 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-2\") pod
\"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.509877 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.513651 4942 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.513681 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/70b345b463ff13ff33bce45da0f4a8796a1574afa2d8fd2ecf4f2239b34767fb/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.515447 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/219b2aa4-0497-40f8-a3d0-947d37be720d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.519899 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-secret-combined-ca-bundle\") pod
\"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.520035 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.521094 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.522871 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/219b2aa4-0497-40f8-a3d0-947d37be720d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.525417 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.525963 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s685z\" (UniqueName: 
\"kubernetes.io/projected/219b2aa4-0497-40f8-a3d0-947d37be720d-kube-api-access-s685z\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.526060 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-config\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.526611 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.586639 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"prometheus-metric-storage-0\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.645528 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.695265 4942 generic.go:334] "Generic (PLEG): container finished" podID="4edc6296-1ba6-43f7-a076-93f94c77a2c9" containerID="811ec8cee78f943aac4bbfb29b95ea4e9d51e51453fc9da48c7eabb6372bfb2b" exitCode=0 Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.695323 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8f782" event={"ID":"4edc6296-1ba6-43f7-a076-93f94c77a2c9","Type":"ContainerDied","Data":"811ec8cee78f943aac4bbfb29b95ea4e9d51e51453fc9da48c7eabb6372bfb2b"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.695355 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8f782" event={"ID":"4edc6296-1ba6-43f7-a076-93f94c77a2c9","Type":"ContainerStarted","Data":"b99ac8b869534b1562557fd7d216264bc451f958990baf05beabf148b59e05dd"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.698787 4942 generic.go:334] "Generic (PLEG): container finished" podID="9a8e424f-44a5-4eaa-9f3f-882f070aa404" containerID="b1d49648de6b3a759e8404975f38b8d6b28e2ed6cf3c88b12649b6a3fed64a43" exitCode=0 Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.698857 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4zlhp" event={"ID":"9a8e424f-44a5-4eaa-9f3f-882f070aa404","Type":"ContainerDied","Data":"b1d49648de6b3a759e8404975f38b8d6b28e2ed6cf3c88b12649b6a3fed64a43"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.698978 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4zlhp" event={"ID":"9a8e424f-44a5-4eaa-9f3f-882f070aa404","Type":"ContainerStarted","Data":"782c7169a527587f0df4ddf04bca08ff30f9b37d9fbb836be2b06d269c8af331"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.700789 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-s54gq" 
event={"ID":"fd491cd9-f58f-4821-8004-a5a4762d6bdb","Type":"ContainerStarted","Data":"55245bf67e01b4a9996ff8822e688651e94d412e130a306f9914243a723acae1"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.700824 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-s54gq" event={"ID":"fd491cd9-f58f-4821-8004-a5a4762d6bdb","Type":"ContainerStarted","Data":"15e5af09725a8d061b0fd0aa1bf3763e9837770b442d86d03cf057837a61bec1"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.708831 4942 generic.go:334] "Generic (PLEG): container finished" podID="fcea68e2-0d37-4812-a7ad-403e59b7b556" containerID="0e02d4fe73a4e293f62bf869926c2629a47060f29d5a8a14b093d650895a851c" exitCode=0 Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.708907 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e916-account-create-update-lm2r5" event={"ID":"fcea68e2-0d37-4812-a7ad-403e59b7b556","Type":"ContainerDied","Data":"0e02d4fe73a4e293f62bf869926c2629a47060f29d5a8a14b093d650895a851c"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.708936 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e916-account-create-update-lm2r5" event={"ID":"fcea68e2-0d37-4812-a7ad-403e59b7b556","Type":"ContainerStarted","Data":"c1d0729b4bd7253e74f4b6ab5454c70366e85e881a46ee7d2e977d6d54e404bc"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.713537 4942 generic.go:334] "Generic (PLEG): container finished" podID="dddbc305-d881-4ef9-ada1-49e8f180162c" containerID="727dde1e275a9b0b467f516dab63cba62b27e6168562e7bbd076fe7b30b2869f" exitCode=0 Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.713611 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-njfd6" event={"ID":"dddbc305-d881-4ef9-ada1-49e8f180162c","Type":"ContainerDied","Data":"727dde1e275a9b0b467f516dab63cba62b27e6168562e7bbd076fe7b30b2869f"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.713637 4942 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-njfd6" event={"ID":"dddbc305-d881-4ef9-ada1-49e8f180162c","Type":"ContainerStarted","Data":"49c2a64763568347502e4187d0859e00063ef22d9d7344a3f0903d0addb05807"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.717111 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fee6-account-create-update-jhlbn" event={"ID":"c903d652-2880-43bd-9445-f1b03764f413","Type":"ContainerStarted","Data":"e0735df4037c9d26aa2f69d57c8e775cb7c18bc1fdb68127c0b914f822f83bec"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.717144 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fee6-account-create-update-jhlbn" event={"ID":"c903d652-2880-43bd-9445-f1b03764f413","Type":"ContainerStarted","Data":"eeca83cb3b414d5fe71656c4ea46b51dc52f248844cf4487dafc4102ebb78727"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.723106 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-4h9n5" event={"ID":"983d5293-8413-4a29-88b2-ba775b3b4a8b","Type":"ContainerStarted","Data":"4d89390c95728bcf123b54a9e3391d1834069387fcdf07d8c1f1a0845cb094b5"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.730324 4942 generic.go:334] "Generic (PLEG): container finished" podID="35dbdf24-b5f9-4a19-96f9-1fe390df90e1" containerID="a8c3861121c5594ca501846681ea609d414d4c26e10e1b891f8ff728174138b2" exitCode=0 Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.730396 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f862-account-create-update-29qlq" event={"ID":"35dbdf24-b5f9-4a19-96f9-1fe390df90e1","Type":"ContainerDied","Data":"a8c3861121c5594ca501846681ea609d414d4c26e10e1b891f8ff728174138b2"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.730424 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f862-account-create-update-29qlq" 
event={"ID":"35dbdf24-b5f9-4a19-96f9-1fe390df90e1","Type":"ContainerStarted","Data":"81ed1e2a309c2b32082391ca65190ea386781edda62314bc4f655da6fdbe708c"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.744116 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-87p82" event={"ID":"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5","Type":"ContainerStarted","Data":"503b84b004f829c8689047881a5f94617e83302d763c11ea9186e35169366871"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.749787 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"85cdef38bdb1a5a0192a5ed4d12d7a5edfcde5f23ee43b27270ae3f89c1d09de"} Feb 18 19:34:24 crc kubenswrapper[4942]: I0218 19:34:24.796958 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-fee6-account-create-update-jhlbn" podStartSLOduration=7.796937105 podStartE2EDuration="7.796937105s" podCreationTimestamp="2026-02-18 19:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:34:24.791843614 +0000 UTC m=+1024.496776279" watchObservedRunningTime="2026-02-18 19:34:24.796937105 +0000 UTC m=+1024.501869770" Feb 18 19:34:25 crc kubenswrapper[4942]: I0218 19:34:25.053695 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="543db3d4-08d8-473f-a6ad-7e6a5bb9734c" path="/var/lib/kubelet/pods/543db3d4-08d8-473f-a6ad-7e6a5bb9734c/volumes" Feb 18 19:34:25 crc kubenswrapper[4942]: I0218 19:34:25.141010 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 19:34:25 crc kubenswrapper[4942]: I0218 19:34:25.764657 4942 generic.go:334] "Generic (PLEG): container finished" podID="c903d652-2880-43bd-9445-f1b03764f413" 
containerID="e0735df4037c9d26aa2f69d57c8e775cb7c18bc1fdb68127c0b914f822f83bec" exitCode=0 Feb 18 19:34:25 crc kubenswrapper[4942]: I0218 19:34:25.764776 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fee6-account-create-update-jhlbn" event={"ID":"c903d652-2880-43bd-9445-f1b03764f413","Type":"ContainerDied","Data":"e0735df4037c9d26aa2f69d57c8e775cb7c18bc1fdb68127c0b914f822f83bec"} Feb 18 19:34:25 crc kubenswrapper[4942]: I0218 19:34:25.771361 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-s54gq" event={"ID":"fd491cd9-f58f-4821-8004-a5a4762d6bdb","Type":"ContainerDied","Data":"55245bf67e01b4a9996ff8822e688651e94d412e130a306f9914243a723acae1"} Feb 18 19:34:25 crc kubenswrapper[4942]: I0218 19:34:25.773967 4942 generic.go:334] "Generic (PLEG): container finished" podID="fd491cd9-f58f-4821-8004-a5a4762d6bdb" containerID="55245bf67e01b4a9996ff8822e688651e94d412e130a306f9914243a723acae1" exitCode=0 Feb 18 19:34:25 crc kubenswrapper[4942]: I0218 19:34:25.776554 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"219b2aa4-0497-40f8-a3d0-947d37be720d","Type":"ContainerStarted","Data":"60c6687648dd41b94a4225ed03866cf4c665cec18c0eb5d84fcb09f0dbc7012b"} Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.459210 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e916-account-create-update-lm2r5" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.472215 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4zlhp" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.481470 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f862-account-create-update-29qlq" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.489365 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-njfd6" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.497954 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8f782" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.583805 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcea68e2-0d37-4812-a7ad-403e59b7b556-operator-scripts\") pod \"fcea68e2-0d37-4812-a7ad-403e59b7b556\" (UID: \"fcea68e2-0d37-4812-a7ad-403e59b7b556\") " Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.583860 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq5mf\" (UniqueName: \"kubernetes.io/projected/4edc6296-1ba6-43f7-a076-93f94c77a2c9-kube-api-access-cq5mf\") pod \"4edc6296-1ba6-43f7-a076-93f94c77a2c9\" (UID: \"4edc6296-1ba6-43f7-a076-93f94c77a2c9\") " Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.583949 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5lhw\" (UniqueName: \"kubernetes.io/projected/9a8e424f-44a5-4eaa-9f3f-882f070aa404-kube-api-access-q5lhw\") pod \"9a8e424f-44a5-4eaa-9f3f-882f070aa404\" (UID: \"9a8e424f-44a5-4eaa-9f3f-882f070aa404\") " Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.583984 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4edc6296-1ba6-43f7-a076-93f94c77a2c9-operator-scripts\") pod \"4edc6296-1ba6-43f7-a076-93f94c77a2c9\" (UID: \"4edc6296-1ba6-43f7-a076-93f94c77a2c9\") " Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.584048 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdmfn\" (UniqueName: \"kubernetes.io/projected/35dbdf24-b5f9-4a19-96f9-1fe390df90e1-kube-api-access-zdmfn\") pod 
\"35dbdf24-b5f9-4a19-96f9-1fe390df90e1\" (UID: \"35dbdf24-b5f9-4a19-96f9-1fe390df90e1\") " Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.584066 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a8e424f-44a5-4eaa-9f3f-882f070aa404-operator-scripts\") pod \"9a8e424f-44a5-4eaa-9f3f-882f070aa404\" (UID: \"9a8e424f-44a5-4eaa-9f3f-882f070aa404\") " Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.584103 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmdm8\" (UniqueName: \"kubernetes.io/projected/fcea68e2-0d37-4812-a7ad-403e59b7b556-kube-api-access-kmdm8\") pod \"fcea68e2-0d37-4812-a7ad-403e59b7b556\" (UID: \"fcea68e2-0d37-4812-a7ad-403e59b7b556\") " Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.584125 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvwzc\" (UniqueName: \"kubernetes.io/projected/dddbc305-d881-4ef9-ada1-49e8f180162c-kube-api-access-zvwzc\") pod \"dddbc305-d881-4ef9-ada1-49e8f180162c\" (UID: \"dddbc305-d881-4ef9-ada1-49e8f180162c\") " Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.584146 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35dbdf24-b5f9-4a19-96f9-1fe390df90e1-operator-scripts\") pod \"35dbdf24-b5f9-4a19-96f9-1fe390df90e1\" (UID: \"35dbdf24-b5f9-4a19-96f9-1fe390df90e1\") " Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.584224 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dddbc305-d881-4ef9-ada1-49e8f180162c-operator-scripts\") pod \"dddbc305-d881-4ef9-ada1-49e8f180162c\" (UID: \"dddbc305-d881-4ef9-ada1-49e8f180162c\") " Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.584671 4942 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcea68e2-0d37-4812-a7ad-403e59b7b556-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fcea68e2-0d37-4812-a7ad-403e59b7b556" (UID: "fcea68e2-0d37-4812-a7ad-403e59b7b556"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.584720 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4edc6296-1ba6-43f7-a076-93f94c77a2c9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4edc6296-1ba6-43f7-a076-93f94c77a2c9" (UID: "4edc6296-1ba6-43f7-a076-93f94c77a2c9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.584975 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dddbc305-d881-4ef9-ada1-49e8f180162c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dddbc305-d881-4ef9-ada1-49e8f180162c" (UID: "dddbc305-d881-4ef9-ada1-49e8f180162c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.585450 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35dbdf24-b5f9-4a19-96f9-1fe390df90e1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "35dbdf24-b5f9-4a19-96f9-1fe390df90e1" (UID: "35dbdf24-b5f9-4a19-96f9-1fe390df90e1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.585515 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a8e424f-44a5-4eaa-9f3f-882f070aa404-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9a8e424f-44a5-4eaa-9f3f-882f070aa404" (UID: "9a8e424f-44a5-4eaa-9f3f-882f070aa404"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.589932 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcea68e2-0d37-4812-a7ad-403e59b7b556-kube-api-access-kmdm8" (OuterVolumeSpecName: "kube-api-access-kmdm8") pod "fcea68e2-0d37-4812-a7ad-403e59b7b556" (UID: "fcea68e2-0d37-4812-a7ad-403e59b7b556"). InnerVolumeSpecName "kube-api-access-kmdm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.590666 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a8e424f-44a5-4eaa-9f3f-882f070aa404-kube-api-access-q5lhw" (OuterVolumeSpecName: "kube-api-access-q5lhw") pod "9a8e424f-44a5-4eaa-9f3f-882f070aa404" (UID: "9a8e424f-44a5-4eaa-9f3f-882f070aa404"). InnerVolumeSpecName "kube-api-access-q5lhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.665970 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35dbdf24-b5f9-4a19-96f9-1fe390df90e1-kube-api-access-zdmfn" (OuterVolumeSpecName: "kube-api-access-zdmfn") pod "35dbdf24-b5f9-4a19-96f9-1fe390df90e1" (UID: "35dbdf24-b5f9-4a19-96f9-1fe390df90e1"). InnerVolumeSpecName "kube-api-access-zdmfn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.675329 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4edc6296-1ba6-43f7-a076-93f94c77a2c9-kube-api-access-cq5mf" (OuterVolumeSpecName: "kube-api-access-cq5mf") pod "4edc6296-1ba6-43f7-a076-93f94c77a2c9" (UID: "4edc6296-1ba6-43f7-a076-93f94c77a2c9"). InnerVolumeSpecName "kube-api-access-cq5mf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.676450 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dddbc305-d881-4ef9-ada1-49e8f180162c-kube-api-access-zvwzc" (OuterVolumeSpecName: "kube-api-access-zvwzc") pod "dddbc305-d881-4ef9-ada1-49e8f180162c" (UID: "dddbc305-d881-4ef9-ada1-49e8f180162c"). InnerVolumeSpecName "kube-api-access-zvwzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.686667 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5lhw\" (UniqueName: \"kubernetes.io/projected/9a8e424f-44a5-4eaa-9f3f-882f070aa404-kube-api-access-q5lhw\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.686709 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4edc6296-1ba6-43f7-a076-93f94c77a2c9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.686728 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdmfn\" (UniqueName: \"kubernetes.io/projected/35dbdf24-b5f9-4a19-96f9-1fe390df90e1-kube-api-access-zdmfn\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.686746 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/9a8e424f-44a5-4eaa-9f3f-882f070aa404-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.686784 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmdm8\" (UniqueName: \"kubernetes.io/projected/fcea68e2-0d37-4812-a7ad-403e59b7b556-kube-api-access-kmdm8\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.686852 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvwzc\" (UniqueName: \"kubernetes.io/projected/dddbc305-d881-4ef9-ada1-49e8f180162c-kube-api-access-zvwzc\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.686870 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35dbdf24-b5f9-4a19-96f9-1fe390df90e1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.686886 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dddbc305-d881-4ef9-ada1-49e8f180162c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.686902 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcea68e2-0d37-4812-a7ad-403e59b7b556-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.686919 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq5mf\" (UniqueName: \"kubernetes.io/projected/4edc6296-1ba6-43f7-a076-93f94c77a2c9-kube-api-access-cq5mf\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.791607 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4zlhp" 
event={"ID":"9a8e424f-44a5-4eaa-9f3f-882f070aa404","Type":"ContainerDied","Data":"782c7169a527587f0df4ddf04bca08ff30f9b37d9fbb836be2b06d269c8af331"} Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.791648 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="782c7169a527587f0df4ddf04bca08ff30f9b37d9fbb836be2b06d269c8af331" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.791647 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4zlhp" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.804584 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f862-account-create-update-29qlq" event={"ID":"35dbdf24-b5f9-4a19-96f9-1fe390df90e1","Type":"ContainerDied","Data":"81ed1e2a309c2b32082391ca65190ea386781edda62314bc4f655da6fdbe708c"} Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.804610 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f862-account-create-update-29qlq" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.804623 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81ed1e2a309c2b32082391ca65190ea386781edda62314bc4f655da6fdbe708c" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.806478 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e916-account-create-update-lm2r5" event={"ID":"fcea68e2-0d37-4812-a7ad-403e59b7b556","Type":"ContainerDied","Data":"c1d0729b4bd7253e74f4b6ab5454c70366e85e881a46ee7d2e977d6d54e404bc"} Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.806503 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1d0729b4bd7253e74f4b6ab5454c70366e85e881a46ee7d2e977d6d54e404bc" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.806521 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e916-account-create-update-lm2r5" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.808138 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-njfd6" event={"ID":"dddbc305-d881-4ef9-ada1-49e8f180162c","Type":"ContainerDied","Data":"49c2a64763568347502e4187d0859e00063ef22d9d7344a3f0903d0addb05807"} Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.808208 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49c2a64763568347502e4187d0859e00063ef22d9d7344a3f0903d0addb05807" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.808229 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-njfd6" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.810599 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8f782" event={"ID":"4edc6296-1ba6-43f7-a076-93f94c77a2c9","Type":"ContainerDied","Data":"b99ac8b869534b1562557fd7d216264bc451f958990baf05beabf148b59e05dd"} Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.810629 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b99ac8b869534b1562557fd7d216264bc451f958990baf05beabf148b59e05dd" Feb 18 19:34:27 crc kubenswrapper[4942]: I0218 19:34:27.810648 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-8f782" Feb 18 19:34:28 crc kubenswrapper[4942]: I0218 19:34:28.822076 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"219b2aa4-0497-40f8-a3d0-947d37be720d","Type":"ContainerStarted","Data":"7267448d8e93628304f568d013573a3a00dd9f0b1c853388c54db4200d6ef067"} Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.848255 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fee6-account-create-update-jhlbn" event={"ID":"c903d652-2880-43bd-9445-f1b03764f413","Type":"ContainerDied","Data":"eeca83cb3b414d5fe71656c4ea46b51dc52f248844cf4487dafc4102ebb78727"} Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.848574 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeca83cb3b414d5fe71656c4ea46b51dc52f248844cf4487dafc4102ebb78727" Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.851317 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-s54gq" event={"ID":"fd491cd9-f58f-4821-8004-a5a4762d6bdb","Type":"ContainerDied","Data":"15e5af09725a8d061b0fd0aa1bf3763e9837770b442d86d03cf057837a61bec1"} Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.851343 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15e5af09725a8d061b0fd0aa1bf3763e9837770b442d86d03cf057837a61bec1" Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.856073 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-s54gq" Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.866526 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-fee6-account-create-update-jhlbn" Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.959579 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fl2t\" (UniqueName: \"kubernetes.io/projected/fd491cd9-f58f-4821-8004-a5a4762d6bdb-kube-api-access-5fl2t\") pod \"fd491cd9-f58f-4821-8004-a5a4762d6bdb\" (UID: \"fd491cd9-f58f-4821-8004-a5a4762d6bdb\") " Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.959844 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c903d652-2880-43bd-9445-f1b03764f413-operator-scripts\") pod \"c903d652-2880-43bd-9445-f1b03764f413\" (UID: \"c903d652-2880-43bd-9445-f1b03764f413\") " Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.960053 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd491cd9-f58f-4821-8004-a5a4762d6bdb-operator-scripts\") pod \"fd491cd9-f58f-4821-8004-a5a4762d6bdb\" (UID: \"fd491cd9-f58f-4821-8004-a5a4762d6bdb\") " Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.960586 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nshl4\" (UniqueName: \"kubernetes.io/projected/c903d652-2880-43bd-9445-f1b03764f413-kube-api-access-nshl4\") pod \"c903d652-2880-43bd-9445-f1b03764f413\" (UID: \"c903d652-2880-43bd-9445-f1b03764f413\") " Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.960427 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c903d652-2880-43bd-9445-f1b03764f413-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c903d652-2880-43bd-9445-f1b03764f413" (UID: "c903d652-2880-43bd-9445-f1b03764f413"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.960488 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd491cd9-f58f-4821-8004-a5a4762d6bdb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fd491cd9-f58f-4821-8004-a5a4762d6bdb" (UID: "fd491cd9-f58f-4821-8004-a5a4762d6bdb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.961578 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c903d652-2880-43bd-9445-f1b03764f413-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.961667 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd491cd9-f58f-4821-8004-a5a4762d6bdb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.965185 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd491cd9-f58f-4821-8004-a5a4762d6bdb-kube-api-access-5fl2t" (OuterVolumeSpecName: "kube-api-access-5fl2t") pod "fd491cd9-f58f-4821-8004-a5a4762d6bdb" (UID: "fd491cd9-f58f-4821-8004-a5a4762d6bdb"). InnerVolumeSpecName "kube-api-access-5fl2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:30 crc kubenswrapper[4942]: I0218 19:34:30.965510 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c903d652-2880-43bd-9445-f1b03764f413-kube-api-access-nshl4" (OuterVolumeSpecName: "kube-api-access-nshl4") pod "c903d652-2880-43bd-9445-f1b03764f413" (UID: "c903d652-2880-43bd-9445-f1b03764f413"). InnerVolumeSpecName "kube-api-access-nshl4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:31 crc kubenswrapper[4942]: I0218 19:34:31.062988 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nshl4\" (UniqueName: \"kubernetes.io/projected/c903d652-2880-43bd-9445-f1b03764f413-kube-api-access-nshl4\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:31 crc kubenswrapper[4942]: I0218 19:34:31.063259 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fl2t\" (UniqueName: \"kubernetes.io/projected/fd491cd9-f58f-4821-8004-a5a4762d6bdb-kube-api-access-5fl2t\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:31 crc kubenswrapper[4942]: I0218 19:34:31.860489 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-s54gq" Feb 18 19:34:31 crc kubenswrapper[4942]: I0218 19:34:31.860522 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fee6-account-create-update-jhlbn" Feb 18 19:34:34 crc kubenswrapper[4942]: I0218 19:34:34.894639 4942 generic.go:334] "Generic (PLEG): container finished" podID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerID="7267448d8e93628304f568d013573a3a00dd9f0b1c853388c54db4200d6ef067" exitCode=0 Feb 18 19:34:34 crc kubenswrapper[4942]: I0218 19:34:34.894751 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"219b2aa4-0497-40f8-a3d0-947d37be720d","Type":"ContainerDied","Data":"7267448d8e93628304f568d013573a3a00dd9f0b1c853388c54db4200d6ef067"} Feb 18 19:34:43 crc kubenswrapper[4942]: E0218 19:34:43.000234 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.12:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest" Feb 18 19:34:43 crc kubenswrapper[4942]: E0218 19:34:43.000597 4942 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled 
desc = copying config: context canceled" image="38.102.83.12:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest" Feb 18 19:34:43 crc kubenswrapper[4942]: E0218 19:34:43.000709 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-db-sync,Image:38.102.83.12:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfbhf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinux
Options:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-db-sync-4h9n5_openstack(983d5293-8413-4a29-88b2-ba775b3b4a8b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:34:43 crc kubenswrapper[4942]: E0218 19:34:43.001911 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-db-sync-4h9n5" podUID="983d5293-8413-4a29-88b2-ba775b3b4a8b" Feb 18 19:34:43 crc kubenswrapper[4942]: I0218 19:34:43.992020 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zw8ls" event={"ID":"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3","Type":"ContainerStarted","Data":"8c6545f8eaa3b666b06d888c16ee9caa900adcec0bcd683e72e4f96180bd297d"} Feb 18 19:34:44 crc kubenswrapper[4942]: I0218 19:34:44.003166 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-87p82" event={"ID":"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5","Type":"ContainerStarted","Data":"373bd2d7e6e62cf5defbed6522169de2de3264581e7024f113223b1465d241c5"} Feb 18 19:34:44 crc kubenswrapper[4942]: I0218 19:34:44.007102 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"cf1f1da32f81e24045adc2bc49f551d88ed9f8d3b07c88459a6652b111442fd0"} Feb 18 19:34:44 crc kubenswrapper[4942]: I0218 19:34:44.007143 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"9311d8502dc45863220a161c11f863317f5924befd91b5142728d84955c095bd"} Feb 18 19:34:44 crc kubenswrapper[4942]: I0218 19:34:44.007153 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"bd26707859e9dcbcf2d0119ae331daae4b638ae2c767bd5fec59453b44e050ae"} Feb 18 19:34:44 crc kubenswrapper[4942]: I0218 19:34:44.007162 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"993c8c60fd3b39f8775403a8dd8bc1d8168b5636a056b78d38e7028ed5ec9139"} Feb 18 19:34:44 crc kubenswrapper[4942]: I0218 19:34:44.009301 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"219b2aa4-0497-40f8-a3d0-947d37be720d","Type":"ContainerStarted","Data":"3c0ad897361779d547581def3c2fec1b2d3e96f7b286fe553ae81f2d2d440845"} Feb 18 19:34:44 crc kubenswrapper[4942]: E0218 19:34:44.011024 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.12:5001/podified-epoxy-centos9/openstack-watcher-api:watcher_latest\\\"\"" pod="openstack/watcher-db-sync-4h9n5" podUID="983d5293-8413-4a29-88b2-ba775b3b4a8b" Feb 18 19:34:44 crc kubenswrapper[4942]: I0218 19:34:44.021062 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-zw8ls" podStartSLOduration=2.620893133 podStartE2EDuration="39.021044111s" podCreationTimestamp="2026-02-18 19:34:05 +0000 UTC" firstStartedPulling="2026-02-18 19:34:06.637294534 +0000 UTC m=+1006.342227199" lastFinishedPulling="2026-02-18 19:34:43.037445512 +0000 UTC m=+1042.742378177" observedRunningTime="2026-02-18 
19:34:44.02062406 +0000 UTC m=+1043.725556735" watchObservedRunningTime="2026-02-18 19:34:44.021044111 +0000 UTC m=+1043.725976776" Feb 18 19:34:44 crc kubenswrapper[4942]: I0218 19:34:44.085945 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-87p82" podStartSLOduration=8.295739952 podStartE2EDuration="27.085920687s" podCreationTimestamp="2026-02-18 19:34:17 +0000 UTC" firstStartedPulling="2026-02-18 19:34:24.160238045 +0000 UTC m=+1023.865170710" lastFinishedPulling="2026-02-18 19:34:42.95041877 +0000 UTC m=+1042.655351445" observedRunningTime="2026-02-18 19:34:44.069957732 +0000 UTC m=+1043.774890387" watchObservedRunningTime="2026-02-18 19:34:44.085920687 +0000 UTC m=+1043.790853342" Feb 18 19:34:46 crc kubenswrapper[4942]: I0218 19:34:46.031028 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"f5c0c2e09a2d8742eb8265dd456d96d2d3f5e3e4eade37d79f317982057f5219"} Feb 18 19:34:46 crc kubenswrapper[4942]: I0218 19:34:46.031512 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"92c71722c3a69153f783c6fc61267ad29332d27b46fbbddb7d510351acdc5d7d"} Feb 18 19:34:47 crc kubenswrapper[4942]: I0218 19:34:47.038812 4942 generic.go:334] "Generic (PLEG): container finished" podID="7ed4f34d-fe0d-402c-95d3-171e73eb5bd5" containerID="373bd2d7e6e62cf5defbed6522169de2de3264581e7024f113223b1465d241c5" exitCode=0 Feb 18 19:34:47 crc kubenswrapper[4942]: I0218 19:34:47.045954 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-87p82" event={"ID":"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5","Type":"ContainerDied","Data":"373bd2d7e6e62cf5defbed6522169de2de3264581e7024f113223b1465d241c5"} Feb 18 19:34:47 crc kubenswrapper[4942]: I0218 19:34:47.046005 4942 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"e916eefff7399466ea313f01b28d972a7abc4a8738eb47fdc513b3228b3584fc"} Feb 18 19:34:47 crc kubenswrapper[4942]: I0218 19:34:47.046019 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"31971173ad26ea7c521fbde21c7957bb0b54e8c3c4403538b81a2957f17c3ea2"} Feb 18 19:34:47 crc kubenswrapper[4942]: I0218 19:34:47.046031 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"219b2aa4-0497-40f8-a3d0-947d37be720d","Type":"ContainerStarted","Data":"d83f11bf1c6741c63e8403adeaf2debe729c7d20905670045ec96ee9fceb1c98"} Feb 18 19:34:47 crc kubenswrapper[4942]: I0218 19:34:47.046044 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"219b2aa4-0497-40f8-a3d0-947d37be720d","Type":"ContainerStarted","Data":"188d1b7181a7567a8d1558b4f9342a2d5d02b2fb0b9db6d0ed29fc015cdd4109"} Feb 18 19:34:47 crc kubenswrapper[4942]: I0218 19:34:47.107591 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=23.107570353 podStartE2EDuration="23.107570353s" podCreationTimestamp="2026-02-18 19:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:34:47.077732077 +0000 UTC m=+1046.782664792" watchObservedRunningTime="2026-02-18 19:34:47.107570353 +0000 UTC m=+1046.812503038" Feb 18 19:34:48 crc kubenswrapper[4942]: I0218 19:34:48.062222 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"08b2c386f0f91e534946abe8e67778681100e9eeb1394a2babb8c6b780a74954"} Feb 18 19:34:48 crc kubenswrapper[4942]: I0218 19:34:48.459625 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-87p82" Feb 18 19:34:48 crc kubenswrapper[4942]: I0218 19:34:48.604444 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-config-data\") pod \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\" (UID: \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\") " Feb 18 19:34:48 crc kubenswrapper[4942]: I0218 19:34:48.604578 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-combined-ca-bundle\") pod \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\" (UID: \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\") " Feb 18 19:34:48 crc kubenswrapper[4942]: I0218 19:34:48.604713 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4b9x\" (UniqueName: \"kubernetes.io/projected/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-kube-api-access-f4b9x\") pod \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\" (UID: \"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5\") " Feb 18 19:34:48 crc kubenswrapper[4942]: I0218 19:34:48.609237 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-kube-api-access-f4b9x" (OuterVolumeSpecName: "kube-api-access-f4b9x") pod "7ed4f34d-fe0d-402c-95d3-171e73eb5bd5" (UID: "7ed4f34d-fe0d-402c-95d3-171e73eb5bd5"). InnerVolumeSpecName "kube-api-access-f4b9x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:34:48 crc kubenswrapper[4942]: I0218 19:34:48.632777 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ed4f34d-fe0d-402c-95d3-171e73eb5bd5" (UID: "7ed4f34d-fe0d-402c-95d3-171e73eb5bd5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:48 crc kubenswrapper[4942]: I0218 19:34:48.652559 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-config-data" (OuterVolumeSpecName: "config-data") pod "7ed4f34d-fe0d-402c-95d3-171e73eb5bd5" (UID: "7ed4f34d-fe0d-402c-95d3-171e73eb5bd5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:34:48 crc kubenswrapper[4942]: I0218 19:34:48.709012 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4b9x\" (UniqueName: \"kubernetes.io/projected/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-kube-api-access-f4b9x\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:48 crc kubenswrapper[4942]: I0218 19:34:48.709040 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:48 crc kubenswrapper[4942]: I0218 19:34:48.709049 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.068679 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-87p82" 
event={"ID":"7ed4f34d-fe0d-402c-95d3-171e73eb5bd5","Type":"ContainerDied","Data":"503b84b004f829c8689047881a5f94617e83302d763c11ea9186e35169366871"} Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.068731 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="503b84b004f829c8689047881a5f94617e83302d763c11ea9186e35169366871" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.068697 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-87p82" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.077924 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"2316229b46c142d52d046e76042a6420d0bd812349a87939b21cd4cc7fe128fa"} Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.077968 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"714f124696e0ff58d3540491ec20f425179bcc1b2713feed3e5e74302085bc84"} Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.077980 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"f15078d7065d6bd0395d8455dba0a45032b03213e63cad816baa689eba81e9bc"} Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.077988 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"f3ff715f94be1a74557b976fd4825fabb666c14abf91066881f881170e5835ba"} Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.077998 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"cdd2348a1592a89a97373234a5f964782a66ea575b8d5a650dbdc5d95f27c3bd"} Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409125 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wknkh"] Feb 18 19:34:49 crc kubenswrapper[4942]: E0218 19:34:49.409534 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcea68e2-0d37-4812-a7ad-403e59b7b556" containerName="mariadb-account-create-update" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409554 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcea68e2-0d37-4812-a7ad-403e59b7b556" containerName="mariadb-account-create-update" Feb 18 19:34:49 crc kubenswrapper[4942]: E0218 19:34:49.409572 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35dbdf24-b5f9-4a19-96f9-1fe390df90e1" containerName="mariadb-account-create-update" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409579 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="35dbdf24-b5f9-4a19-96f9-1fe390df90e1" containerName="mariadb-account-create-update" Feb 18 19:34:49 crc kubenswrapper[4942]: E0218 19:34:49.409595 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c903d652-2880-43bd-9445-f1b03764f413" containerName="mariadb-account-create-update" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409601 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="c903d652-2880-43bd-9445-f1b03764f413" containerName="mariadb-account-create-update" Feb 18 19:34:49 crc kubenswrapper[4942]: E0218 19:34:49.409610 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4edc6296-1ba6-43f7-a076-93f94c77a2c9" containerName="mariadb-account-create-update" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409616 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="4edc6296-1ba6-43f7-a076-93f94c77a2c9" 
containerName="mariadb-account-create-update" Feb 18 19:34:49 crc kubenswrapper[4942]: E0218 19:34:49.409624 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ed4f34d-fe0d-402c-95d3-171e73eb5bd5" containerName="keystone-db-sync" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409629 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ed4f34d-fe0d-402c-95d3-171e73eb5bd5" containerName="keystone-db-sync" Feb 18 19:34:49 crc kubenswrapper[4942]: E0218 19:34:49.409640 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a8e424f-44a5-4eaa-9f3f-882f070aa404" containerName="mariadb-database-create" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409647 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a8e424f-44a5-4eaa-9f3f-882f070aa404" containerName="mariadb-database-create" Feb 18 19:34:49 crc kubenswrapper[4942]: E0218 19:34:49.409659 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dddbc305-d881-4ef9-ada1-49e8f180162c" containerName="mariadb-database-create" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409667 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="dddbc305-d881-4ef9-ada1-49e8f180162c" containerName="mariadb-database-create" Feb 18 19:34:49 crc kubenswrapper[4942]: E0218 19:34:49.409675 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd491cd9-f58f-4821-8004-a5a4762d6bdb" containerName="mariadb-database-create" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409681 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd491cd9-f58f-4821-8004-a5a4762d6bdb" containerName="mariadb-database-create" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409893 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="35dbdf24-b5f9-4a19-96f9-1fe390df90e1" containerName="mariadb-account-create-update" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409914 4942 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="fcea68e2-0d37-4812-a7ad-403e59b7b556" containerName="mariadb-account-create-update" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409923 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="c903d652-2880-43bd-9445-f1b03764f413" containerName="mariadb-account-create-update" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409937 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="4edc6296-1ba6-43f7-a076-93f94c77a2c9" containerName="mariadb-account-create-update" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409959 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="dddbc305-d881-4ef9-ada1-49e8f180162c" containerName="mariadb-database-create" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409969 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ed4f34d-fe0d-402c-95d3-171e73eb5bd5" containerName="keystone-db-sync" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409984 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a8e424f-44a5-4eaa-9f3f-882f070aa404" containerName="mariadb-database-create" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.409995 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd491cd9-f58f-4821-8004-a5a4762d6bdb" containerName="mariadb-database-create" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.410610 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.413008 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.414291 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.414613 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.414919 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9szpl" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.415234 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.418141 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-8qph9"] Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.422628 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.437009 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-8qph9"] Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.455017 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wknkh"] Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.525058 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-dns-svc\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.525101 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9csq5\" (UniqueName: \"kubernetes.io/projected/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-kube-api-access-9csq5\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.525160 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfbpd\" (UniqueName: \"kubernetes.io/projected/0e907b66-eaef-489a-b729-f61f0c7e347d-kube-api-access-jfbpd\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.525177 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-config-data\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " 
pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.525202 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.525234 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-combined-ca-bundle\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.525250 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-fernet-keys\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.525268 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.525296 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-credential-keys\") pod \"keystone-bootstrap-wknkh\" (UID: 
\"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.525309 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-config\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.525334 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-scripts\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.563468 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-p9l27"] Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.564609 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-p9l27" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.573234 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.573264 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pc4kw" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.574292 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.588250 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-p9l27"] Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.626562 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-credential-keys\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.626604 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-config\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.626634 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-scripts\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.626669 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-dns-svc\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.626689 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9csq5\" (UniqueName: \"kubernetes.io/projected/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-kube-api-access-9csq5\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.626742 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfbpd\" (UniqueName: \"kubernetes.io/projected/0e907b66-eaef-489a-b729-f61f0c7e347d-kube-api-access-jfbpd\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.626776 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-config-data\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.627168 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.627327 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-combined-ca-bundle\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.627422 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-fernet-keys\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.627509 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.628551 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.628632 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.629434 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-dns-svc\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: 
\"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.631778 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-scripts\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.631972 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-config-data\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.633568 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-combined-ca-bundle\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.633846 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-config\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.640906 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-credential-keys\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.642269 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-fernet-keys\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.645835 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.692801 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6487999dc5-x92k5"] Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.694626 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6487999dc5-x92k5" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.707600 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9csq5\" (UniqueName: \"kubernetes.io/projected/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-kube-api-access-9csq5\") pod \"keystone-bootstrap-wknkh\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.708581 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfbpd\" (UniqueName: \"kubernetes.io/projected/0e907b66-eaef-489a-b729-f61f0c7e347d-kube-api-access-jfbpd\") pod \"dnsmasq-dns-f877ddd87-8qph9\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") " pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.728256 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-j2dt6" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.728674 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.728881 4942 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.729077 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.732656 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d99k8\" (UniqueName: \"kubernetes.io/projected/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-kube-api-access-d99k8\") pod \"neutron-db-sync-p9l27\" (UID: \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\") " pod="openstack/neutron-db-sync-p9l27" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.732883 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-combined-ca-bundle\") pod \"neutron-db-sync-p9l27\" (UID: \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\") " pod="openstack/neutron-db-sync-p9l27" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.732962 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-config\") pod \"neutron-db-sync-p9l27\" (UID: \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\") " pod="openstack/neutron-db-sync-p9l27" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.745503 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.751854 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-8qph9" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.827182 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6487999dc5-x92k5"] Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.837195 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d99k8\" (UniqueName: \"kubernetes.io/projected/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-kube-api-access-d99k8\") pod \"neutron-db-sync-p9l27\" (UID: \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\") " pod="openstack/neutron-db-sync-p9l27" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.837313 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-combined-ca-bundle\") pod \"neutron-db-sync-p9l27\" (UID: \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\") " pod="openstack/neutron-db-sync-p9l27" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.837340 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-config\") pod \"neutron-db-sync-p9l27\" (UID: \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\") " pod="openstack/neutron-db-sync-p9l27" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.856151 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-config\") pod \"neutron-db-sync-p9l27\" (UID: \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\") " pod="openstack/neutron-db-sync-p9l27" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.902395 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-combined-ca-bundle\") pod \"neutron-db-sync-p9l27\" (UID: 
\"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\") " pod="openstack/neutron-db-sync-p9l27" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.939552 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d99k8\" (UniqueName: \"kubernetes.io/projected/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-kube-api-access-d99k8\") pod \"neutron-db-sync-p9l27\" (UID: \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\") " pod="openstack/neutron-db-sync-p9l27" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.949090 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4f4df56-7f3e-490d-9321-dc520b65369a-logs\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.949170 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c4f4df56-7f3e-490d-9321-dc520b65369a-horizon-secret-key\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.949204 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8qtj\" (UniqueName: \"kubernetes.io/projected/c4f4df56-7f3e-490d-9321-dc520b65369a-kube-api-access-r8qtj\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.949237 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4f4df56-7f3e-490d-9321-dc520b65369a-config-data\") pod \"horizon-6487999dc5-x92k5\" (UID: 
\"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.949256 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4f4df56-7f3e-490d-9321-dc520b65369a-scripts\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5" Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.972716 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-qvzh5"] Feb 18 19:34:49 crc kubenswrapper[4942]: I0218 19:34:49.973896 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qvzh5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.005452 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qvzh5"] Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.011015 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.011211 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.012126 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rhdz8" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.037159 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-9ntpw"] Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.038499 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-9ntpw" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.050715 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z75l6\" (UniqueName: \"kubernetes.io/projected/8db7f68b-a733-44fc-90b9-a1dd489fb42d-kube-api-access-z75l6\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.050785 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4f4df56-7f3e-490d-9321-dc520b65369a-logs\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.050820 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c4f4df56-7f3e-490d-9321-dc520b65369a-horizon-secret-key\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.050845 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8qtj\" (UniqueName: \"kubernetes.io/projected/c4f4df56-7f3e-490d-9321-dc520b65369a-kube-api-access-r8qtj\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.050868 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4f4df56-7f3e-490d-9321-dc520b65369a-config-data\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5" Feb 18 
19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.050885 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4f4df56-7f3e-490d-9321-dc520b65369a-scripts\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.050910 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-scripts\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.050927 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-config-data\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.050994 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-combined-ca-bundle\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.051013 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8db7f68b-a733-44fc-90b9-a1dd489fb42d-etc-machine-id\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.051039 4942 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-db-sync-config-data\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.051463 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4f4df56-7f3e-490d-9321-dc520b65369a-logs\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.053916 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4f4df56-7f3e-490d-9321-dc520b65369a-scripts\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.054007 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4f4df56-7f3e-490d-9321-dc520b65369a-config-data\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.061749 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.061935 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.062110 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-z4q86" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.066408 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c4f4df56-7f3e-490d-9321-dc520b65369a-horizon-secret-key\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.073818 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8qtj\" (UniqueName: \"kubernetes.io/projected/c4f4df56-7f3e-490d-9321-dc520b65369a-kube-api-access-r8qtj\") pod \"horizon-6487999dc5-x92k5\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " pod="openstack/horizon-6487999dc5-x92k5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.095511 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-9ntpw"] Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.122463 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"125bdbb5-76a8-450f-b645-2133024a1bd0","Type":"ContainerStarted","Data":"040a6c9a84d4f13ad2cfb74bb2fba17bf1179819d6e678b47e5738572d85436f"} Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.128856 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-8qph9"] Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.136920 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-h2kjs"] Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.137884 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6487999dc5-x92k5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.138335 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-h2kjs" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.143213 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.143393 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-qg5fj" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.154005 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z75l6\" (UniqueName: \"kubernetes.io/projected/8db7f68b-a733-44fc-90b9-a1dd489fb42d-kube-api-access-z75l6\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.154318 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-combined-ca-bundle\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.154457 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-scripts\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.154543 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-config-data\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 
19:34:50.154627 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-config-data\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.154723 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-scripts\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.154843 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dmdr\" (UniqueName: \"kubernetes.io/projected/af8e769c-00c3-41a1-97c4-d91902767dfe-kube-api-access-9dmdr\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.154937 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af8e769c-00c3-41a1-97c4-d91902767dfe-logs\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.155056 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-combined-ca-bundle\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.155145 4942 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8db7f68b-a733-44fc-90b9-a1dd489fb42d-etc-machine-id\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.155239 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-db-sync-config-data\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.159350 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8db7f68b-a733-44fc-90b9-a1dd489fb42d-etc-machine-id\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.164055 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-h2kjs"] Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.177587 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-config-data\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.180331 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-db-sync-config-data\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.182368 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-scripts\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.185911 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z75l6\" (UniqueName: \"kubernetes.io/projected/8db7f68b-a733-44fc-90b9-a1dd489fb42d-kube-api-access-z75l6\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.187224 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-combined-ca-bundle\") pod \"cinder-db-sync-qvzh5\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " pod="openstack/cinder-db-sync-qvzh5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.202048 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-p9l27" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.204264 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5dcf8ff489-qc7h7"] Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.205911 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.235344 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5dcf8ff489-qc7h7"] Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.243832 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-tmqbr"] Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.245784 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.258870 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dmdr\" (UniqueName: \"kubernetes.io/projected/af8e769c-00c3-41a1-97c4-d91902767dfe-kube-api-access-9dmdr\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.258913 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af8e769c-00c3-41a1-97c4-d91902767dfe-logs\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.258979 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8aeac097-ba93-4859-a14f-839ae1421e28-db-sync-config-data\") pod \"barbican-db-sync-h2kjs\" (UID: \"8aeac097-ba93-4859-a14f-839ae1421e28\") " pod="openstack/barbican-db-sync-h2kjs" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.259053 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdzfp\" (UniqueName: \"kubernetes.io/projected/8aeac097-ba93-4859-a14f-839ae1421e28-kube-api-access-kdzfp\") pod \"barbican-db-sync-h2kjs\" (UID: \"8aeac097-ba93-4859-a14f-839ae1421e28\") " pod="openstack/barbican-db-sync-h2kjs" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.259152 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aeac097-ba93-4859-a14f-839ae1421e28-combined-ca-bundle\") pod \"barbican-db-sync-h2kjs\" (UID: \"8aeac097-ba93-4859-a14f-839ae1421e28\") " 
pod="openstack/barbican-db-sync-h2kjs" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.259261 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-combined-ca-bundle\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.259359 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-config-data\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.259439 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-scripts\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.260859 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af8e769c-00c3-41a1-97c4-d91902767dfe-logs\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.266067 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-combined-ca-bundle\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.270609 4942 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-config-data\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.274021 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-scripts\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.282104 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dmdr\" (UniqueName: \"kubernetes.io/projected/af8e769c-00c3-41a1-97c4-d91902767dfe-kube-api-access-9dmdr\") pod \"placement-db-sync-9ntpw\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " pod="openstack/placement-db-sync-9ntpw" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.284945 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.287639 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.294591 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-qvzh5" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.296826 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.297033 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.359560 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-tmqbr"] Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.362913 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4517368-322e-4467-b31a-45b487e1035b-run-httpd\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.362970 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8aeac097-ba93-4859-a14f-839ae1421e28-db-sync-config-data\") pod \"barbican-db-sync-h2kjs\" (UID: \"8aeac097-ba93-4859-a14f-839ae1421e28\") " pod="openstack/barbican-db-sync-h2kjs" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363032 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdzfp\" (UniqueName: \"kubernetes.io/projected/8aeac097-ba93-4859-a14f-839ae1421e28-kube-api-access-kdzfp\") pod \"barbican-db-sync-h2kjs\" (UID: \"8aeac097-ba93-4859-a14f-839ae1421e28\") " pod="openstack/barbican-db-sync-h2kjs" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363060 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79f6285d-991e-4118-8f5b-d451c225f1d6-scripts\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: 
\"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363109 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363146 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68tzs\" (UniqueName: \"kubernetes.io/projected/e4517368-322e-4467-b31a-45b487e1035b-kube-api-access-68tzs\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363234 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79f6285d-991e-4118-8f5b-d451c225f1d6-logs\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363294 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aeac097-ba93-4859-a14f-839ae1421e28-combined-ca-bundle\") pod \"barbican-db-sync-h2kjs\" (UID: \"8aeac097-ba93-4859-a14f-839ae1421e28\") " pod="openstack/barbican-db-sync-h2kjs" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363353 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-config-data\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" 
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363393 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79f6285d-991e-4118-8f5b-d451c225f1d6-config-data\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363499 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-config\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363559 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363591 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjzkc\" (UniqueName: \"kubernetes.io/projected/f152879a-9670-449a-be9f-d3314368e29c-kube-api-access-sjzkc\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363673 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-scripts\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc 
kubenswrapper[4942]: I0218 19:34:50.363752 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363808 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/79f6285d-991e-4118-8f5b-d451c225f1d6-horizon-secret-key\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363832 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4517368-322e-4467-b31a-45b487e1035b-log-httpd\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363864 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363884 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.363903 4942 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwckx\" (UniqueName: \"kubernetes.io/projected/79f6285d-991e-4118-8f5b-d451c225f1d6-kube-api-access-mwckx\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.368329 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=45.142573986 podStartE2EDuration="1m8.368313153s" podCreationTimestamp="2026-02-18 19:33:42 +0000 UTC" firstStartedPulling="2026-02-18 19:34:24.42636145 +0000 UTC m=+1024.131294115" lastFinishedPulling="2026-02-18 19:34:47.652100627 +0000 UTC m=+1047.357033282" observedRunningTime="2026-02-18 19:34:50.207412491 +0000 UTC m=+1049.912345156" watchObservedRunningTime="2026-02-18 19:34:50.368313153 +0000 UTC m=+1050.073245818" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.374115 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.390397 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-9ntpw" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.401012 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8aeac097-ba93-4859-a14f-839ae1421e28-db-sync-config-data\") pod \"barbican-db-sync-h2kjs\" (UID: \"8aeac097-ba93-4859-a14f-839ae1421e28\") " pod="openstack/barbican-db-sync-h2kjs" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.418296 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aeac097-ba93-4859-a14f-839ae1421e28-combined-ca-bundle\") pod \"barbican-db-sync-h2kjs\" (UID: \"8aeac097-ba93-4859-a14f-839ae1421e28\") " pod="openstack/barbican-db-sync-h2kjs" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.435867 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdzfp\" (UniqueName: \"kubernetes.io/projected/8aeac097-ba93-4859-a14f-839ae1421e28-kube-api-access-kdzfp\") pod \"barbican-db-sync-h2kjs\" (UID: \"8aeac097-ba93-4859-a14f-839ae1421e28\") " pod="openstack/barbican-db-sync-h2kjs" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.470680 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.470731 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68tzs\" (UniqueName: \"kubernetes.io/projected/e4517368-322e-4467-b31a-45b487e1035b-kube-api-access-68tzs\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 
19:34:50.470752 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79f6285d-991e-4118-8f5b-d451c225f1d6-logs\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.470805 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-config-data\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.470825 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79f6285d-991e-4118-8f5b-d451c225f1d6-config-data\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.470859 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-config\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.470873 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.470891 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjzkc\" (UniqueName: 
\"kubernetes.io/projected/f152879a-9670-449a-be9f-d3314368e29c-kube-api-access-sjzkc\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.470923 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-scripts\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.470946 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.470965 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/79f6285d-991e-4118-8f5b-d451c225f1d6-horizon-secret-key\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.470980 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4517368-322e-4467-b31a-45b487e1035b-log-httpd\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.470996 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " 
pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.471015 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.471032 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwckx\" (UniqueName: \"kubernetes.io/projected/79f6285d-991e-4118-8f5b-d451c225f1d6-kube-api-access-mwckx\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.471065 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4517368-322e-4467-b31a-45b487e1035b-run-httpd\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.471095 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79f6285d-991e-4118-8f5b-d451c225f1d6-scripts\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.471731 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79f6285d-991e-4118-8f5b-d451c225f1d6-scripts\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.472323 4942 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.472814 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79f6285d-991e-4118-8f5b-d451c225f1d6-logs\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.474775 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.475704 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79f6285d-991e-4118-8f5b-d451c225f1d6-config-data\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.476214 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-config\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.479081 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-ovsdbserver-nb\") pod 
\"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.479597 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4517368-322e-4467-b31a-45b487e1035b-run-httpd\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.479846 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4517368-322e-4467-b31a-45b487e1035b-log-httpd\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.480154 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h2kjs" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.497312 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-config-data\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.506279 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.507050 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") 
" pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.518182 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68tzs\" (UniqueName: \"kubernetes.io/projected/e4517368-322e-4467-b31a-45b487e1035b-kube-api-access-68tzs\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.519785 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-scripts\") pod \"ceilometer-0\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") " pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.525038 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/79f6285d-991e-4118-8f5b-d451c225f1d6-horizon-secret-key\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.531020 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjzkc\" (UniqueName: \"kubernetes.io/projected/f152879a-9670-449a-be9f-d3314368e29c-kube-api-access-sjzkc\") pod \"dnsmasq-dns-68dcc9cf6f-tmqbr\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.534158 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwckx\" (UniqueName: \"kubernetes.io/projected/79f6285d-991e-4118-8f5b-d451c225f1d6-kube-api-access-mwckx\") pod \"horizon-5dcf8ff489-qc7h7\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.542465 4942 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-tmqbr"] Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.544382 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.580637 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-pdwb6"] Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.583284 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.587223 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.594258 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-pdwb6"] Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.677587 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.677668 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.677701 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-config\") 
pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.677789 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.677815 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs5jn\" (UniqueName: \"kubernetes.io/projected/f354be6c-0a53-41b2-923d-60de99a6ed65-kube-api-access-cs5jn\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.677913 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.757497 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.779608 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs5jn\" (UniqueName: \"kubernetes.io/projected/f354be6c-0a53-41b2-923d-60de99a6ed65-kube-api-access-cs5jn\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.779688 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.779736 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.780168 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.780192 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-config\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " 
pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.780247 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.781352 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.781804 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.781937 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.782612 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.782693 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-config\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.803171 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs5jn\" (UniqueName: \"kubernetes.io/projected/f354be6c-0a53-41b2-923d-60de99a6ed65-kube-api-access-cs5jn\") pod \"dnsmasq-dns-58dd9ff6bc-pdwb6\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.818384 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wknkh"]
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.832823 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dcf8ff489-qc7h7"
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.844353 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-8qph9"]
Feb 18 19:34:50 crc kubenswrapper[4942]: I0218 19:34:50.926693 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6"
Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.012570 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-9ntpw"]
Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.031249 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6487999dc5-x92k5"]
Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.069112 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-p9l27"]
Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.158822 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-p9l27" event={"ID":"a6c912f7-7ee8-4f53-a358-a6a6a5088be5","Type":"ContainerStarted","Data":"316d5107b8b347fd0cea3be7273208da7013d9d15ad9e9d0440db47bc1ed0d8e"}
Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.161505 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-8qph9" event={"ID":"0e907b66-eaef-489a-b729-f61f0c7e347d","Type":"ContainerStarted","Data":"43ab5328f956e2ed08ffaa0187dd014311f455b02df0f837b7a002e208528e41"}
Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.163733 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9ntpw" event={"ID":"af8e769c-00c3-41a1-97c4-d91902767dfe","Type":"ContainerStarted","Data":"eb7a8e3a23f3477cac51aacb10a95d5378f6772c63aae9e96752efd516b0a2a1"}
Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.166646 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6487999dc5-x92k5" event={"ID":"c4f4df56-7f3e-490d-9321-dc520b65369a","Type":"ContainerStarted","Data":"ac381e3f114e8f2e0ca2ad49412144e5bd5345aa14e469a41eeec38b75b61e1c"}
Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.173073 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wknkh" event={"ID":"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0","Type":"ContainerStarted","Data":"262290f48bc9f52e9ad2af485330819793bdd52215504ccea4c7c0b79cc77dac"}
Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.195901 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-h2kjs"]
Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.277994 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:34:51 crc kubenswrapper[4942]: W0218 19:34:51.297879 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf152879a_9670_449a_be9f_d3314368e29c.slice/crio-e2f8c0a37589b6fa961dd22d4a9b95b3343135606c2c9865d94c65eebcefb5e6 WatchSource:0}: Error finding container e2f8c0a37589b6fa961dd22d4a9b95b3343135606c2c9865d94c65eebcefb5e6: Status 404 returned error can't find the container with id e2f8c0a37589b6fa961dd22d4a9b95b3343135606c2c9865d94c65eebcefb5e6
Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.307279 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qvzh5"]
Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.316825 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-tmqbr"]
Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.407382 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-pdwb6"]
Feb 18 19:34:51 crc kubenswrapper[4942]: W0218 19:34:51.419633 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf354be6c_0a53_41b2_923d_60de99a6ed65.slice/crio-d0d48456629f18d0d25f803c1de4ee3c6cb53d9140b37084a9b2aa9d6750f014 WatchSource:0}: Error finding container d0d48456629f18d0d25f803c1de4ee3c6cb53d9140b37084a9b2aa9d6750f014: Status 404 returned error can't find the container with id d0d48456629f18d0d25f803c1de4ee3c6cb53d9140b37084a9b2aa9d6750f014
Feb 18 19:34:51 crc kubenswrapper[4942]: I0218 19:34:51.528851 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5dcf8ff489-qc7h7"]
Feb 18 19:34:51 crc kubenswrapper[4942]: W0218 19:34:51.545858 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79f6285d_991e_4118_8f5b_d451c225f1d6.slice/crio-f7d111b50e472dcb7f51a51999f9e9be0fffcc2cd7c0ebb311c39dd7aa656b89 WatchSource:0}: Error finding container f7d111b50e472dcb7f51a51999f9e9be0fffcc2cd7c0ebb311c39dd7aa656b89: Status 404 returned error can't find the container with id f7d111b50e472dcb7f51a51999f9e9be0fffcc2cd7c0ebb311c39dd7aa656b89
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.189802 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wknkh" event={"ID":"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0","Type":"ContainerStarted","Data":"fa114cb799909584016955a551d4df04e20f11df9588933ed8a958c11cc58031"}
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.194076 4942 generic.go:334] "Generic (PLEG): container finished" podID="f354be6c-0a53-41b2-923d-60de99a6ed65" containerID="64088a0ac3e8c72656fdd5f6eb8640c5b4d051cdc758b1b3613619d364046d6d" exitCode=0
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.194130 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" event={"ID":"f354be6c-0a53-41b2-923d-60de99a6ed65","Type":"ContainerDied","Data":"64088a0ac3e8c72656fdd5f6eb8640c5b4d051cdc758b1b3613619d364046d6d"}
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.194148 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" event={"ID":"f354be6c-0a53-41b2-923d-60de99a6ed65","Type":"ContainerStarted","Data":"d0d48456629f18d0d25f803c1de4ee3c6cb53d9140b37084a9b2aa9d6750f014"}
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.197833 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-p9l27" event={"ID":"a6c912f7-7ee8-4f53-a358-a6a6a5088be5","Type":"ContainerStarted","Data":"1f69a1fd29ab925cd8cf8e9aff116531b62f274c86f6998747eb096250393ed9"}
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.200555 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qvzh5" event={"ID":"8db7f68b-a733-44fc-90b9-a1dd489fb42d","Type":"ContainerStarted","Data":"e6bd17d6977af834a72bbf74bee36179b26553390413854446805a67a2e12afa"}
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.202024 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4517368-322e-4467-b31a-45b487e1035b","Type":"ContainerStarted","Data":"6813065f5777b4af8dd89f8c25333785bb85a450b21a1a7ab93d214ca1b8049c"}
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.207487 4942 generic.go:334] "Generic (PLEG): container finished" podID="0e907b66-eaef-489a-b729-f61f0c7e347d" containerID="0d67f368ec724e01a3830704823ce44b7d34d87d57cad7e2696b5373ea79d251" exitCode=0
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.207555 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-8qph9" event={"ID":"0e907b66-eaef-489a-b729-f61f0c7e347d","Type":"ContainerDied","Data":"0d67f368ec724e01a3830704823ce44b7d34d87d57cad7e2696b5373ea79d251"}
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.208127 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wknkh" podStartSLOduration=3.208117437 podStartE2EDuration="3.208117437s" podCreationTimestamp="2026-02-18 19:34:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:34:52.207913562 +0000 UTC m=+1051.912846247" watchObservedRunningTime="2026-02-18 19:34:52.208117437 +0000 UTC m=+1051.913050102"
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.213263 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h2kjs" event={"ID":"8aeac097-ba93-4859-a14f-839ae1421e28","Type":"ContainerStarted","Data":"e12d1b9fecda9ebe7bb6c836765d71cc803f359fe9c297ce1d8263fb74f3fe1c"}
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.222827 4942 generic.go:334] "Generic (PLEG): container finished" podID="f152879a-9670-449a-be9f-d3314368e29c" containerID="7873d578054ec79fc1afaa80065023d0a5361c0b9d7456f037b28f5f4424be84" exitCode=0
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.222888 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" event={"ID":"f152879a-9670-449a-be9f-d3314368e29c","Type":"ContainerDied","Data":"7873d578054ec79fc1afaa80065023d0a5361c0b9d7456f037b28f5f4424be84"}
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.222912 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" event={"ID":"f152879a-9670-449a-be9f-d3314368e29c","Type":"ContainerStarted","Data":"e2f8c0a37589b6fa961dd22d4a9b95b3343135606c2c9865d94c65eebcefb5e6"}
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.231298 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dcf8ff489-qc7h7" event={"ID":"79f6285d-991e-4118-8f5b-d451c225f1d6","Type":"ContainerStarted","Data":"f7d111b50e472dcb7f51a51999f9e9be0fffcc2cd7c0ebb311c39dd7aa656b89"}
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.292243 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-p9l27" podStartSLOduration=3.292226653 podStartE2EDuration="3.292226653s" podCreationTimestamp="2026-02-18 19:34:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:34:52.28478302 +0000 UTC m=+1051.989715685" watchObservedRunningTime="2026-02-18 19:34:52.292226653 +0000 UTC m=+1051.997159318"
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.497805 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5dcf8ff489-qc7h7"]
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.546840 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7d6d8bb5d5-5l49m"]
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.548357 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d6d8bb5d5-5l49m"
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.555641 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d6d8bb5d5-5l49m"]
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.643724 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29a05f17-8ada-451f-8460-887a45caa4e6-horizon-secret-key\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m"
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.643794 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29a05f17-8ada-451f-8460-887a45caa4e6-config-data\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m"
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.643860 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29a05f17-8ada-451f-8460-887a45caa4e6-logs\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m"
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.644651 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbrsd\" (UniqueName: \"kubernetes.io/projected/29a05f17-8ada-451f-8460-887a45caa4e6-kube-api-access-hbrsd\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m"
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.644715 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29a05f17-8ada-451f-8460-887a45caa4e6-scripts\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m"
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.738096 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.748413 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29a05f17-8ada-451f-8460-887a45caa4e6-logs\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m"
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.748461 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbrsd\" (UniqueName: \"kubernetes.io/projected/29a05f17-8ada-451f-8460-887a45caa4e6-kube-api-access-hbrsd\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m"
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.748487 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29a05f17-8ada-451f-8460-887a45caa4e6-scripts\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m"
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.748552 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29a05f17-8ada-451f-8460-887a45caa4e6-horizon-secret-key\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m"
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.748579 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29a05f17-8ada-451f-8460-887a45caa4e6-config-data\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m"
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.749617 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29a05f17-8ada-451f-8460-887a45caa4e6-config-data\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m"
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.749862 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29a05f17-8ada-451f-8460-887a45caa4e6-logs\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m"
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.750396 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29a05f17-8ada-451f-8460-887a45caa4e6-scripts\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m"
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.768709 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29a05f17-8ada-451f-8460-887a45caa4e6-horizon-secret-key\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m"
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.774288 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbrsd\" (UniqueName: \"kubernetes.io/projected/29a05f17-8ada-451f-8460-887a45caa4e6-kube-api-access-hbrsd\") pod \"horizon-7d6d8bb5d5-5l49m\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " pod="openstack/horizon-7d6d8bb5d5-5l49m"
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.860685 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-8qph9"
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.883656 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d6d8bb5d5-5l49m"
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.929274 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr"
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.962572 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-dns-svc\") pod \"0e907b66-eaef-489a-b729-f61f0c7e347d\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") "
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.962698 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-config\") pod \"0e907b66-eaef-489a-b729-f61f0c7e347d\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") "
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.962743 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfbpd\" (UniqueName: \"kubernetes.io/projected/0e907b66-eaef-489a-b729-f61f0c7e347d-kube-api-access-jfbpd\") pod \"0e907b66-eaef-489a-b729-f61f0c7e347d\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") "
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.962816 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-ovsdbserver-nb\") pod \"0e907b66-eaef-489a-b729-f61f0c7e347d\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") "
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.962883 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-ovsdbserver-sb\") pod \"0e907b66-eaef-489a-b729-f61f0c7e347d\" (UID: \"0e907b66-eaef-489a-b729-f61f0c7e347d\") "
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.972938 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e907b66-eaef-489a-b729-f61f0c7e347d-kube-api-access-jfbpd" (OuterVolumeSpecName: "kube-api-access-jfbpd") pod "0e907b66-eaef-489a-b729-f61f0c7e347d" (UID: "0e907b66-eaef-489a-b729-f61f0c7e347d"). InnerVolumeSpecName "kube-api-access-jfbpd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.987051 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-config" (OuterVolumeSpecName: "config") pod "0e907b66-eaef-489a-b729-f61f0c7e347d" (UID: "0e907b66-eaef-489a-b729-f61f0c7e347d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:52 crc kubenswrapper[4942]: I0218 19:34:52.989232 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0e907b66-eaef-489a-b729-f61f0c7e347d" (UID: "0e907b66-eaef-489a-b729-f61f0c7e347d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.012705 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0e907b66-eaef-489a-b729-f61f0c7e347d" (UID: "0e907b66-eaef-489a-b729-f61f0c7e347d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.014468 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0e907b66-eaef-489a-b729-f61f0c7e347d" (UID: "0e907b66-eaef-489a-b729-f61f0c7e347d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.065042 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-dns-svc\") pod \"f152879a-9670-449a-be9f-d3314368e29c\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") "
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.065171 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-config\") pod \"f152879a-9670-449a-be9f-d3314368e29c\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") "
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.065199 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-ovsdbserver-sb\") pod \"f152879a-9670-449a-be9f-d3314368e29c\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") "
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.065267 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-ovsdbserver-nb\") pod \"f152879a-9670-449a-be9f-d3314368e29c\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") "
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.065427 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjzkc\" (UniqueName: \"kubernetes.io/projected/f152879a-9670-449a-be9f-d3314368e29c-kube-api-access-sjzkc\") pod \"f152879a-9670-449a-be9f-d3314368e29c\" (UID: \"f152879a-9670-449a-be9f-d3314368e29c\") "
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.065884 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.065901 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.065910 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.065918 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e907b66-eaef-489a-b729-f61f0c7e347d-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.065926 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfbpd\" (UniqueName: \"kubernetes.io/projected/0e907b66-eaef-489a-b729-f61f0c7e347d-kube-api-access-jfbpd\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.079192 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f152879a-9670-449a-be9f-d3314368e29c-kube-api-access-sjzkc" (OuterVolumeSpecName: "kube-api-access-sjzkc") pod "f152879a-9670-449a-be9f-d3314368e29c" (UID: "f152879a-9670-449a-be9f-d3314368e29c"). InnerVolumeSpecName "kube-api-access-sjzkc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.096324 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f152879a-9670-449a-be9f-d3314368e29c" (UID: "f152879a-9670-449a-be9f-d3314368e29c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.096404 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-config" (OuterVolumeSpecName: "config") pod "f152879a-9670-449a-be9f-d3314368e29c" (UID: "f152879a-9670-449a-be9f-d3314368e29c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.103670 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f152879a-9670-449a-be9f-d3314368e29c" (UID: "f152879a-9670-449a-be9f-d3314368e29c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.109513 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f152879a-9670-449a-be9f-d3314368e29c" (UID: "f152879a-9670-449a-be9f-d3314368e29c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.173069 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.173369 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.173397 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.173409 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f152879a-9670-449a-be9f-d3314368e29c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.173436 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjzkc\" (UniqueName: \"kubernetes.io/projected/f152879a-9670-449a-be9f-d3314368e29c-kube-api-access-sjzkc\") on node \"crc\" DevicePath \"\""
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.247581 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-8qph9"
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.248337 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-8qph9" event={"ID":"0e907b66-eaef-489a-b729-f61f0c7e347d","Type":"ContainerDied","Data":"43ab5328f956e2ed08ffaa0187dd014311f455b02df0f837b7a002e208528e41"}
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.248369 4942 scope.go:117] "RemoveContainer" containerID="0d67f368ec724e01a3830704823ce44b7d34d87d57cad7e2696b5373ea79d251"
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.251478 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr" event={"ID":"f152879a-9670-449a-be9f-d3314368e29c","Type":"ContainerDied","Data":"e2f8c0a37589b6fa961dd22d4a9b95b3343135606c2c9865d94c65eebcefb5e6"}
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.251549 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-tmqbr"
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.255152 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" event={"ID":"f354be6c-0a53-41b2-923d-60de99a6ed65","Type":"ContainerStarted","Data":"cd8e8a9783f92883c4d637d09eea3e643009a45c5a511f5d36eb98f2dff7bd34"}
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.255565 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6"
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.311711 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" podStartSLOduration=3.311694024 podStartE2EDuration="3.311694024s" podCreationTimestamp="2026-02-18 19:34:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:34:53.289144078 +0000 UTC m=+1052.994076763" watchObservedRunningTime="2026-02-18 19:34:53.311694024 +0000 UTC m=+1053.016626689"
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.352270 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-tmqbr"]
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.364814 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-tmqbr"]
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.409132 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-8qph9"]
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.420172 4942 scope.go:117] "RemoveContainer" containerID="7873d578054ec79fc1afaa80065023d0a5361c0b9d7456f037b28f5f4424be84"
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.428063 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-8qph9"]
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.445866 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d6d8bb5d5-5l49m"]
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.741351 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.741406 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.741449 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4"
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.742250 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4ad75b87330a71997979db298f42e179882b61890e654d3a0c077cf25d5cb90b"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 18 19:34:53 crc kubenswrapper[4942]: I0218 19:34:53.742308 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://4ad75b87330a71997979db298f42e179882b61890e654d3a0c077cf25d5cb90b" gracePeriod=600
Feb 18 19:34:54 crc kubenswrapper[4942]: I0218 19:34:54.276437 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="4ad75b87330a71997979db298f42e179882b61890e654d3a0c077cf25d5cb90b" exitCode=0
Feb 18 19:34:54 crc kubenswrapper[4942]: I0218 19:34:54.276518 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"4ad75b87330a71997979db298f42e179882b61890e654d3a0c077cf25d5cb90b"}
Feb 18 19:34:54 crc kubenswrapper[4942]: I0218 19:34:54.276722 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"8ecda90ff377eb2cb3234b37ad9a8ec87fa575a7e7c5a3a78ee7c2e00f4a7b66"}
Feb 18 19:34:54 crc kubenswrapper[4942]: I0218 19:34:54.276739 4942 scope.go:117] "RemoveContainer" containerID="573640abad6b15c1dd30fd80a1b600755a1efda149dab25e49e3a1173acf646a"
Feb 18 19:34:54 crc kubenswrapper[4942]: I0218 19:34:54.280549 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d6d8bb5d5-5l49m" event={"ID":"29a05f17-8ada-451f-8460-887a45caa4e6","Type":"ContainerStarted","Data":"f9f400d74dcc827f603d02436cc05b6b30e0d9e44bb3117a942b80c1685b87ee"}
Feb 18 19:34:54 crc kubenswrapper[4942]: I0218 19:34:54.296600 4942 generic.go:334] "Generic (PLEG): container finished" podID="72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3" containerID="8c6545f8eaa3b666b06d888c16ee9caa900adcec0bcd683e72e4f96180bd297d" exitCode=0
Feb 18 19:34:54 crc kubenswrapper[4942]: I0218 19:34:54.296690 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zw8ls" event={"ID":"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3","Type":"ContainerDied","Data":"8c6545f8eaa3b666b06d888c16ee9caa900adcec0bcd683e72e4f96180bd297d"}
Feb 18 19:34:54 crc kubenswrapper[4942]: I0218 19:34:54.646582 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:54 crc kubenswrapper[4942]: I0218 19:34:54.661269 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Feb 18 19:34:55 crc kubenswrapper[4942]: I0218 19:34:55.051341 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e907b66-eaef-489a-b729-f61f0c7e347d" path="/var/lib/kubelet/pods/0e907b66-eaef-489a-b729-f61f0c7e347d/volumes"
Feb 18 19:34:55 crc kubenswrapper[4942]: I0218 19:34:55.052453 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f152879a-9670-449a-be9f-d3314368e29c" path="/var/lib/kubelet/pods/f152879a-9670-449a-be9f-d3314368e29c/volumes"
Feb 18 19:34:55 crc kubenswrapper[4942]: I0218 19:34:55.332610 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Feb
18 19:34:56 crc kubenswrapper[4942]: I0218 19:34:56.346816 4942 generic.go:334] "Generic (PLEG): container finished" podID="4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0" containerID="fa114cb799909584016955a551d4df04e20f11df9588933ed8a958c11cc58031" exitCode=0 Feb 18 19:34:56 crc kubenswrapper[4942]: I0218 19:34:56.346989 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wknkh" event={"ID":"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0","Type":"ContainerDied","Data":"fa114cb799909584016955a551d4df04e20f11df9588933ed8a958c11cc58031"} Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.163166 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6487999dc5-x92k5"] Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.197852 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-54d64cf59b-xp7rk"] Feb 18 19:34:59 crc kubenswrapper[4942]: E0218 19:34:59.198173 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e907b66-eaef-489a-b729-f61f0c7e347d" containerName="init" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.198187 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e907b66-eaef-489a-b729-f61f0c7e347d" containerName="init" Feb 18 19:34:59 crc kubenswrapper[4942]: E0218 19:34:59.198222 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f152879a-9670-449a-be9f-d3314368e29c" containerName="init" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.198232 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f152879a-9670-449a-be9f-d3314368e29c" containerName="init" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.198402 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e907b66-eaef-489a-b729-f61f0c7e347d" containerName="init" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.198426 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f152879a-9670-449a-be9f-d3314368e29c" 
containerName="init" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.199265 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.202818 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.211105 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-54d64cf59b-xp7rk"] Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.224056 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-logs\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.224160 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-scripts\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.224221 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnrb5\" (UniqueName: \"kubernetes.io/projected/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-kube-api-access-dnrb5\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.224259 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-combined-ca-bundle\") pod 
\"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.224337 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-config-data\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.224393 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-horizon-tls-certs\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.224423 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-horizon-secret-key\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.247349 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d6d8bb5d5-5l49m"] Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.281098 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7b6b6597b8-m8ngr"] Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.282700 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.289487 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b6b6597b8-m8ngr"] Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326089 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/55d24776-2d1c-413a-8ba1-06cdadf63d04-horizon-secret-key\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326157 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d24776-2d1c-413a-8ba1-06cdadf63d04-combined-ca-bundle\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326256 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-combined-ca-bundle\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326292 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55d24776-2d1c-413a-8ba1-06cdadf63d04-logs\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326391 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-config-data\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326425 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-horizon-tls-certs\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326441 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/55d24776-2d1c-413a-8ba1-06cdadf63d04-horizon-tls-certs\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326476 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55d24776-2d1c-413a-8ba1-06cdadf63d04-scripts\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326507 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-horizon-secret-key\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326524 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8hzt\" (UniqueName: 
\"kubernetes.io/projected/55d24776-2d1c-413a-8ba1-06cdadf63d04-kube-api-access-j8hzt\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326694 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-logs\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326780 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55d24776-2d1c-413a-8ba1-06cdadf63d04-config-data\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326815 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-scripts\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.326851 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnrb5\" (UniqueName: \"kubernetes.io/projected/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-kube-api-access-dnrb5\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.328937 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-logs\") pod \"horizon-54d64cf59b-xp7rk\" (UID: 
\"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.329544 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-config-data\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.330291 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-scripts\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.332584 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-horizon-tls-certs\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.333087 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-horizon-secret-key\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.340789 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-combined-ca-bundle\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 
19:34:59.345923 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnrb5\" (UniqueName: \"kubernetes.io/projected/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-kube-api-access-dnrb5\") pod \"horizon-54d64cf59b-xp7rk\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") " pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.428412 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55d24776-2d1c-413a-8ba1-06cdadf63d04-config-data\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.428501 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/55d24776-2d1c-413a-8ba1-06cdadf63d04-horizon-secret-key\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.428540 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d24776-2d1c-413a-8ba1-06cdadf63d04-combined-ca-bundle\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.428568 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55d24776-2d1c-413a-8ba1-06cdadf63d04-logs\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.428632 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/55d24776-2d1c-413a-8ba1-06cdadf63d04-horizon-tls-certs\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.428665 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55d24776-2d1c-413a-8ba1-06cdadf63d04-scripts\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.428696 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8hzt\" (UniqueName: \"kubernetes.io/projected/55d24776-2d1c-413a-8ba1-06cdadf63d04-kube-api-access-j8hzt\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.429471 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55d24776-2d1c-413a-8ba1-06cdadf63d04-logs\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.430180 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55d24776-2d1c-413a-8ba1-06cdadf63d04-scripts\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.431972 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55d24776-2d1c-413a-8ba1-06cdadf63d04-config-data\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: 
\"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.433024 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/55d24776-2d1c-413a-8ba1-06cdadf63d04-horizon-secret-key\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.434392 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d24776-2d1c-413a-8ba1-06cdadf63d04-combined-ca-bundle\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.434420 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/55d24776-2d1c-413a-8ba1-06cdadf63d04-horizon-tls-certs\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.456248 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8hzt\" (UniqueName: \"kubernetes.io/projected/55d24776-2d1c-413a-8ba1-06cdadf63d04-kube-api-access-j8hzt\") pod \"horizon-7b6b6597b8-m8ngr\" (UID: \"55d24776-2d1c-413a-8ba1-06cdadf63d04\") " pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.526419 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:34:59 crc kubenswrapper[4942]: I0218 19:34:59.614413 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:35:00 crc kubenswrapper[4942]: I0218 19:35:00.927952 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:35:00 crc kubenswrapper[4942]: I0218 19:35:00.995189 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nnzck"] Feb 18 19:35:00 crc kubenswrapper[4942]: I0218 19:35:00.995507 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-nnzck" podUID="1e919317-cae2-432d-959f-8cf1d4520b56" containerName="dnsmasq-dns" containerID="cri-o://c929bc7a17036437784be59c9727e4ee675c038074de07e36b3deb35090e3ae7" gracePeriod=10 Feb 18 19:35:01 crc kubenswrapper[4942]: I0218 19:35:01.396694 4942 generic.go:334] "Generic (PLEG): container finished" podID="1e919317-cae2-432d-959f-8cf1d4520b56" containerID="c929bc7a17036437784be59c9727e4ee675c038074de07e36b3deb35090e3ae7" exitCode=0 Feb 18 19:35:01 crc kubenswrapper[4942]: I0218 19:35:01.396828 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nnzck" event={"ID":"1e919317-cae2-432d-959f-8cf1d4520b56","Type":"ContainerDied","Data":"c929bc7a17036437784be59c9727e4ee675c038074de07e36b3deb35090e3ae7"} Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.182175 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-zw8ls" Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.310662 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-db-sync-config-data\") pod \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.310756 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-combined-ca-bundle\") pod \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.310842 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb9h8\" (UniqueName: \"kubernetes.io/projected/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-kube-api-access-gb9h8\") pod \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.310886 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-config-data\") pod \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\" (UID: \"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3\") " Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.320507 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3" (UID: "72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.320886 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-kube-api-access-gb9h8" (OuterVolumeSpecName: "kube-api-access-gb9h8") pod "72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3" (UID: "72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3"). InnerVolumeSpecName "kube-api-access-gb9h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.343722 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3" (UID: "72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.371651 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-config-data" (OuterVolumeSpecName: "config-data") pod "72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3" (UID: "72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.408339 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zw8ls" event={"ID":"72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3","Type":"ContainerDied","Data":"e983b61464f792023c5c202bd16dd9437e3b945f9e2f82c09b596638a70e9520"} Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.408375 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e983b61464f792023c5c202bd16dd9437e3b945f9e2f82c09b596638a70e9520" Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.408427 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-zw8ls" Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.414280 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gb9h8\" (UniqueName: \"kubernetes.io/projected/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-kube-api-access-gb9h8\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.414316 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.414330 4942 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.414341 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:02 crc kubenswrapper[4942]: I0218 19:35:02.856171 4942 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-698758b865-nnzck" podUID="1e919317-cae2-432d-959f-8cf1d4520b56" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.120:5353: connect: connection refused" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.635770 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-jhblh"] Feb 18 19:35:03 crc kubenswrapper[4942]: E0218 19:35:03.636367 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3" containerName="glance-db-sync" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.636383 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3" containerName="glance-db-sync" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.636563 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3" containerName="glance-db-sync" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.644440 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.650464 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-jhblh"] Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.736944 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.736995 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.737035 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-config\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.737104 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8v6l\" (UniqueName: \"kubernetes.io/projected/47732c7e-8c0f-4244-bddb-98bf7b21d2db-kube-api-access-d8v6l\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.737130 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.737171 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.838616 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.838690 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.838747 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-config\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.838846 4942 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-d8v6l\" (UniqueName: \"kubernetes.io/projected/47732c7e-8c0f-4244-bddb-98bf7b21d2db-kube-api-access-d8v6l\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.838884 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.838947 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.840098 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.840201 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.840440 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-config\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.840839 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.841180 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.873542 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8v6l\" (UniqueName: \"kubernetes.io/projected/47732c7e-8c0f-4244-bddb-98bf7b21d2db-kube-api-access-d8v6l\") pod \"dnsmasq-dns-785d8bcb8c-jhblh\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:03 crc kubenswrapper[4942]: I0218 19:35:03.966379 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.537644 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.539036 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.540970 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-j6c2t" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.541185 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.541305 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.552357 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.652280 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.652345 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-scripts\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.652424 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: 
I0218 19:35:04.652491 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.652513 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-config-data\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.652577 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-logs\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.652648 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n92mx\" (UniqueName: \"kubernetes.io/projected/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-kube-api-access-n92mx\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.754433 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.755016 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-config-data\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.755084 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-logs\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.755213 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n92mx\" (UniqueName: \"kubernetes.io/projected/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-kube-api-access-n92mx\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.755269 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.755320 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-scripts\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.755377 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.756236 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-logs\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.754880 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.756855 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.762005 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-config-data\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.764777 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-scripts\") 
pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.772052 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.773191 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.776344 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.783141 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.785593 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n92mx\" (UniqueName: \"kubernetes.io/projected/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-kube-api-access-n92mx\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.794682 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.815234 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:35:04 crc 
kubenswrapper[4942]: I0218 19:35:04.856610 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.856742 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-logs\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.856807 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.856832 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.856851 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc 
kubenswrapper[4942]: I0218 19:35:04.856881 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6mcd\" (UniqueName: \"kubernetes.io/projected/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-kube-api-access-r6mcd\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.856909 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.863840 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.958132 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.958194 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.958224 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") 
pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.958264 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6mcd\" (UniqueName: \"kubernetes.io/projected/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-kube-api-access-r6mcd\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.958306 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.958347 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.958421 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-logs\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.958542 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") device mount 
path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.959634 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-logs\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.961718 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.961901 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.964591 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.975538 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 
19:35:04.976540 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6mcd\" (UniqueName: \"kubernetes.io/projected/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-kube-api-access-r6mcd\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:04 crc kubenswrapper[4942]: I0218 19:35:04.986546 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:05 crc kubenswrapper[4942]: I0218 19:35:05.174635 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:35:06 crc kubenswrapper[4942]: I0218 19:35:06.258810 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:35:06 crc kubenswrapper[4942]: I0218 19:35:06.338582 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:35:07 crc kubenswrapper[4942]: I0218 19:35:07.856098 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-nnzck" podUID="1e919317-cae2-432d-959f-8cf1d4520b56" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.120:5353: connect: connection refused" Feb 18 19:35:09 crc kubenswrapper[4942]: E0218 19:35:09.838209 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 18 19:35:09 crc kubenswrapper[4942]: E0218 19:35:09.838886 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5b9h64dh5bdh596h588hb7h6h5d6h654h5c7h558hdch5ffh686h68ch5ddh686h5f9hcbh544h587h55fhd7h5c9h676hfhc4h5cdh674h599hd4h547q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8qtj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6487999dc5-x92k5_openstack(c4f4df56-7f3e-490d-9321-dc520b65369a): ErrImagePull: 
rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:35:09 crc kubenswrapper[4942]: E0218 19:35:09.879327 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6487999dc5-x92k5" podUID="c4f4df56-7f3e-490d-9321-dc520b65369a" Feb 18 19:35:11 crc kubenswrapper[4942]: E0218 19:35:11.645753 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 18 19:35:11 crc kubenswrapper[4942]: E0218 19:35:11.646298 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5bfh679h78h649h5ffh5ch5b6h56dhf7h54h86h57hbch68fhc8h557h8fh5c7h5b5h6fhb4h5fh5ddh56fh595h5dfh55fh566h5f6h64h84h86q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mwckx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5dcf8ff489-qc7h7_openstack(79f6285d-991e-4118-8f5b-d451c225f1d6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:35:11 crc kubenswrapper[4942]: E0218 
19:35:11.648310 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5dcf8ff489-qc7h7" podUID="79f6285d-991e-4118-8f5b-d451c225f1d6" Feb 18 19:35:11 crc kubenswrapper[4942]: E0218 19:35:11.652961 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 18 19:35:11 crc kubenswrapper[4942]: E0218 19:35:11.653088 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n598h5c7h4h64bh55ch55ch688h668h56h676h5fdhd5h5b9h589h697h8dh57dhdbh568hf5h655h566h579h99h55bh66dh544h594h66h7ch646h568q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbrsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7d6d8bb5d5-5l49m_openstack(29a05f17-8ada-451f-8460-887a45caa4e6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:35:11 crc kubenswrapper[4942]: E0218 
19:35:11.655298 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7d6d8bb5d5-5l49m" podUID="29a05f17-8ada-451f-8460-887a45caa4e6" Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.776577 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.879161 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-fernet-keys\") pod \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.879222 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-config-data\") pod \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.879247 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9csq5\" (UniqueName: \"kubernetes.io/projected/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-kube-api-access-9csq5\") pod \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.879333 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-scripts\") pod 
\"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.879432 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-credential-keys\") pod \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.879477 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-combined-ca-bundle\") pod \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\" (UID: \"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0\") " Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.885710 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-scripts" (OuterVolumeSpecName: "scripts") pod "4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0" (UID: "4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.886470 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0" (UID: "4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.886997 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-kube-api-access-9csq5" (OuterVolumeSpecName: "kube-api-access-9csq5") pod "4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0" (UID: "4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0"). InnerVolumeSpecName "kube-api-access-9csq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.894441 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0" (UID: "4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.907352 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0" (UID: "4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.913640 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-config-data" (OuterVolumeSpecName: "config-data") pod "4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0" (UID: "4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.981458 4942 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.981490 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.981500 4942 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.981508 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.981517 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9csq5\" (UniqueName: \"kubernetes.io/projected/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-kube-api-access-9csq5\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:11 crc kubenswrapper[4942]: I0218 19:35:11.981527 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.494438 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wknkh" Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.494457 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wknkh" event={"ID":"4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0","Type":"ContainerDied","Data":"262290f48bc9f52e9ad2af485330819793bdd52215504ccea4c7c0b79cc77dac"} Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.494509 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="262290f48bc9f52e9ad2af485330819793bdd52215504ccea4c7c0b79cc77dac" Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.496351 4942 generic.go:334] "Generic (PLEG): container finished" podID="a6c912f7-7ee8-4f53-a358-a6a6a5088be5" containerID="1f69a1fd29ab925cd8cf8e9aff116531b62f274c86f6998747eb096250393ed9" exitCode=0 Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.496389 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-p9l27" event={"ID":"a6c912f7-7ee8-4f53-a358-a6a6a5088be5","Type":"ContainerDied","Data":"1f69a1fd29ab925cd8cf8e9aff116531b62f274c86f6998747eb096250393ed9"} Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.896483 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wknkh"] Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.904788 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wknkh"] Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.978953 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-tnqg7"] Feb 18 19:35:12 crc kubenswrapper[4942]: E0218 19:35:12.979301 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0" containerName="keystone-bootstrap" Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.979317 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0" 
containerName="keystone-bootstrap" Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.979535 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0" containerName="keystone-bootstrap" Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.980364 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.982619 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.982739 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.982776 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9szpl" Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.982970 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.983227 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 19:35:12 crc kubenswrapper[4942]: I0218 19:35:12.998005 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tnqg7"] Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.048649 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0" path="/var/lib/kubelet/pods/4c05cfb2-6a1f-46e5-a784-29bad5c3cdc0/volumes" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.101319 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-scripts\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " 
pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.101371 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-fernet-keys\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.101400 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-credential-keys\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.101439 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-config-data\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.101988 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28x9f\" (UniqueName: \"kubernetes.io/projected/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-kube-api-access-28x9f\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.102237 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-combined-ca-bundle\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " 
pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.203237 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28x9f\" (UniqueName: \"kubernetes.io/projected/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-kube-api-access-28x9f\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.203530 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-combined-ca-bundle\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.203608 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-scripts\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.203630 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-fernet-keys\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.203658 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-credential-keys\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 
19:35:13.203686 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-config-data\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.209336 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-config-data\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.216668 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-fernet-keys\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.219810 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-scripts\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.220036 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-combined-ca-bundle\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.220802 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28x9f\" (UniqueName: 
\"kubernetes.io/projected/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-kube-api-access-28x9f\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.222404 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-credential-keys\") pod \"keystone-bootstrap-tnqg7\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:13 crc kubenswrapper[4942]: I0218 19:35:13.297118 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:17 crc kubenswrapper[4942]: I0218 19:35:17.857389 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-nnzck" podUID="1e919317-cae2-432d-959f-8cf1d4520b56" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.120:5353: i/o timeout" Feb 18 19:35:17 crc kubenswrapper[4942]: I0218 19:35:17.857927 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-nnzck" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.326611 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6487999dc5-x92k5" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.528532 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c4f4df56-7f3e-490d-9321-dc520b65369a-horizon-secret-key\") pod \"c4f4df56-7f3e-490d-9321-dc520b65369a\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.528713 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4f4df56-7f3e-490d-9321-dc520b65369a-logs\") pod \"c4f4df56-7f3e-490d-9321-dc520b65369a\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.528896 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4f4df56-7f3e-490d-9321-dc520b65369a-scripts\") pod \"c4f4df56-7f3e-490d-9321-dc520b65369a\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.529328 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4f4df56-7f3e-490d-9321-dc520b65369a-logs" (OuterVolumeSpecName: "logs") pod "c4f4df56-7f3e-490d-9321-dc520b65369a" (UID: "c4f4df56-7f3e-490d-9321-dc520b65369a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.529689 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4f4df56-7f3e-490d-9321-dc520b65369a-scripts" (OuterVolumeSpecName: "scripts") pod "c4f4df56-7f3e-490d-9321-dc520b65369a" (UID: "c4f4df56-7f3e-490d-9321-dc520b65369a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.529803 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4f4df56-7f3e-490d-9321-dc520b65369a-config-data\") pod \"c4f4df56-7f3e-490d-9321-dc520b65369a\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.529838 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8qtj\" (UniqueName: \"kubernetes.io/projected/c4f4df56-7f3e-490d-9321-dc520b65369a-kube-api-access-r8qtj\") pod \"c4f4df56-7f3e-490d-9321-dc520b65369a\" (UID: \"c4f4df56-7f3e-490d-9321-dc520b65369a\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.530338 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4f4df56-7f3e-490d-9321-dc520b65369a-config-data" (OuterVolumeSpecName: "config-data") pod "c4f4df56-7f3e-490d-9321-dc520b65369a" (UID: "c4f4df56-7f3e-490d-9321-dc520b65369a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.530848 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4f4df56-7f3e-490d-9321-dc520b65369a-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.530879 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4f4df56-7f3e-490d-9321-dc520b65369a-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.530896 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c4f4df56-7f3e-490d-9321-dc520b65369a-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.536463 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f4df56-7f3e-490d-9321-dc520b65369a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c4f4df56-7f3e-490d-9321-dc520b65369a" (UID: "c4f4df56-7f3e-490d-9321-dc520b65369a"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.536962 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4f4df56-7f3e-490d-9321-dc520b65369a-kube-api-access-r8qtj" (OuterVolumeSpecName: "kube-api-access-r8qtj") pod "c4f4df56-7f3e-490d-9321-dc520b65369a" (UID: "c4f4df56-7f3e-490d-9321-dc520b65369a"). InnerVolumeSpecName "kube-api-access-r8qtj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.553459 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6487999dc5-x92k5" event={"ID":"c4f4df56-7f3e-490d-9321-dc520b65369a","Type":"ContainerDied","Data":"ac381e3f114e8f2e0ca2ad49412144e5bd5345aa14e469a41eeec38b75b61e1c"} Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.553556 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6487999dc5-x92k5" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.631635 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8qtj\" (UniqueName: \"kubernetes.io/projected/c4f4df56-7f3e-490d-9321-dc520b65369a-kube-api-access-r8qtj\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.631662 4942 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c4f4df56-7f3e-490d-9321-dc520b65369a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.691187 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6487999dc5-x92k5"] Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.699352 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6487999dc5-x92k5"] Feb 18 19:35:19 crc kubenswrapper[4942]: E0218 19:35:19.798496 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 18 19:35:19 crc kubenswrapper[4942]: E0218 19:35:19.798668 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdzfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-h2kjs_openstack(8aeac097-ba93-4859-a14f-839ae1421e28): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 19:35:19 crc kubenswrapper[4942]: E0218 19:35:19.799864 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-h2kjs" 
podUID="8aeac097-ba93-4859-a14f-839ae1421e28" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.848551 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nnzck" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.856879 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dcf8ff489-qc7h7" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.900018 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d6d8bb5d5-5l49m" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.907251 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-p9l27" Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.937393 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-combined-ca-bundle\") pod \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\" (UID: \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.937471 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d99k8\" (UniqueName: \"kubernetes.io/projected/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-kube-api-access-d99k8\") pod \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\" (UID: \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.937550 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29a05f17-8ada-451f-8460-887a45caa4e6-config-data\") pod \"29a05f17-8ada-451f-8460-887a45caa4e6\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.937641 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79f6285d-991e-4118-8f5b-d451c225f1d6-logs\") pod \"79f6285d-991e-4118-8f5b-d451c225f1d6\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.937711 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79f6285d-991e-4118-8f5b-d451c225f1d6-config-data\") pod \"79f6285d-991e-4118-8f5b-d451c225f1d6\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.937736 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29a05f17-8ada-451f-8460-887a45caa4e6-logs\") pod \"29a05f17-8ada-451f-8460-887a45caa4e6\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.937814 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-config\") pod \"1e919317-cae2-432d-959f-8cf1d4520b56\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.937860 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-ovsdbserver-sb\") pod \"1e919317-cae2-432d-959f-8cf1d4520b56\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.937988 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29a05f17-8ada-451f-8460-887a45caa4e6-horizon-secret-key\") pod \"29a05f17-8ada-451f-8460-887a45caa4e6\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") " Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 
19:35:19.938051 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-dns-svc\") pod \"1e919317-cae2-432d-959f-8cf1d4520b56\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") "
Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.938076 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwckx\" (UniqueName: \"kubernetes.io/projected/79f6285d-991e-4118-8f5b-d451c225f1d6-kube-api-access-mwckx\") pod \"79f6285d-991e-4118-8f5b-d451c225f1d6\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") "
Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.938144 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-ovsdbserver-nb\") pod \"1e919317-cae2-432d-959f-8cf1d4520b56\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") "
Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.938217 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29a05f17-8ada-451f-8460-887a45caa4e6-scripts\") pod \"29a05f17-8ada-451f-8460-887a45caa4e6\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") "
Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.938293 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-config\") pod \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\" (UID: \"a6c912f7-7ee8-4f53-a358-a6a6a5088be5\") "
Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.938327 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79f6285d-991e-4118-8f5b-d451c225f1d6-scripts\") pod \"79f6285d-991e-4118-8f5b-d451c225f1d6\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") "
Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.939054 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29a05f17-8ada-451f-8460-887a45caa4e6-scripts" (OuterVolumeSpecName: "scripts") pod "29a05f17-8ada-451f-8460-887a45caa4e6" (UID: "29a05f17-8ada-451f-8460-887a45caa4e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.939638 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29a05f17-8ada-451f-8460-887a45caa4e6-logs" (OuterVolumeSpecName: "logs") pod "29a05f17-8ada-451f-8460-887a45caa4e6" (UID: "29a05f17-8ada-451f-8460-887a45caa4e6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.939800 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbrsd\" (UniqueName: \"kubernetes.io/projected/29a05f17-8ada-451f-8460-887a45caa4e6-kube-api-access-hbrsd\") pod \"29a05f17-8ada-451f-8460-887a45caa4e6\" (UID: \"29a05f17-8ada-451f-8460-887a45caa4e6\") "
Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.939839 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8vt6\" (UniqueName: \"kubernetes.io/projected/1e919317-cae2-432d-959f-8cf1d4520b56-kube-api-access-q8vt6\") pod \"1e919317-cae2-432d-959f-8cf1d4520b56\" (UID: \"1e919317-cae2-432d-959f-8cf1d4520b56\") "
Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.939870 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/79f6285d-991e-4118-8f5b-d451c225f1d6-horizon-secret-key\") pod \"79f6285d-991e-4118-8f5b-d451c225f1d6\" (UID: \"79f6285d-991e-4118-8f5b-d451c225f1d6\") "
Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.939838 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79f6285d-991e-4118-8f5b-d451c225f1d6-config-data" (OuterVolumeSpecName: "config-data") pod "79f6285d-991e-4118-8f5b-d451c225f1d6" (UID: "79f6285d-991e-4118-8f5b-d451c225f1d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.940289 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79f6285d-991e-4118-8f5b-d451c225f1d6-logs" (OuterVolumeSpecName: "logs") pod "79f6285d-991e-4118-8f5b-d451c225f1d6" (UID: "79f6285d-991e-4118-8f5b-d451c225f1d6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.940543 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29a05f17-8ada-451f-8460-887a45caa4e6-config-data" (OuterVolumeSpecName: "config-data") pod "29a05f17-8ada-451f-8460-887a45caa4e6" (UID: "29a05f17-8ada-451f-8460-887a45caa4e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.940790 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79f6285d-991e-4118-8f5b-d451c225f1d6-logs\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.940812 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79f6285d-991e-4118-8f5b-d451c225f1d6-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.940829 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29a05f17-8ada-451f-8460-887a45caa4e6-logs\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.940841 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29a05f17-8ada-451f-8460-887a45caa4e6-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.940853 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29a05f17-8ada-451f-8460-887a45caa4e6-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.945981 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79f6285d-991e-4118-8f5b-d451c225f1d6-kube-api-access-mwckx" (OuterVolumeSpecName: "kube-api-access-mwckx") pod "79f6285d-991e-4118-8f5b-d451c225f1d6" (UID: "79f6285d-991e-4118-8f5b-d451c225f1d6"). InnerVolumeSpecName "kube-api-access-mwckx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.946374 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79f6285d-991e-4118-8f5b-d451c225f1d6-scripts" (OuterVolumeSpecName: "scripts") pod "79f6285d-991e-4118-8f5b-d451c225f1d6" (UID: "79f6285d-991e-4118-8f5b-d451c225f1d6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.964353 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29a05f17-8ada-451f-8460-887a45caa4e6-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "29a05f17-8ada-451f-8460-887a45caa4e6" (UID: "29a05f17-8ada-451f-8460-887a45caa4e6"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.964400 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29a05f17-8ada-451f-8460-887a45caa4e6-kube-api-access-hbrsd" (OuterVolumeSpecName: "kube-api-access-hbrsd") pod "29a05f17-8ada-451f-8460-887a45caa4e6" (UID: "29a05f17-8ada-451f-8460-887a45caa4e6"). InnerVolumeSpecName "kube-api-access-hbrsd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.964935 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79f6285d-991e-4118-8f5b-d451c225f1d6-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "79f6285d-991e-4118-8f5b-d451c225f1d6" (UID: "79f6285d-991e-4118-8f5b-d451c225f1d6"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.978262 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-kube-api-access-d99k8" (OuterVolumeSpecName: "kube-api-access-d99k8") pod "a6c912f7-7ee8-4f53-a358-a6a6a5088be5" (UID: "a6c912f7-7ee8-4f53-a358-a6a6a5088be5"). InnerVolumeSpecName "kube-api-access-d99k8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.984237 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e919317-cae2-432d-959f-8cf1d4520b56-kube-api-access-q8vt6" (OuterVolumeSpecName: "kube-api-access-q8vt6") pod "1e919317-cae2-432d-959f-8cf1d4520b56" (UID: "1e919317-cae2-432d-959f-8cf1d4520b56"). InnerVolumeSpecName "kube-api-access-q8vt6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:35:19 crc kubenswrapper[4942]: I0218 19:35:19.994217 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6c912f7-7ee8-4f53-a358-a6a6a5088be5" (UID: "a6c912f7-7ee8-4f53-a358-a6a6a5088be5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.000292 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-config" (OuterVolumeSpecName: "config") pod "a6c912f7-7ee8-4f53-a358-a6a6a5088be5" (UID: "a6c912f7-7ee8-4f53-a358-a6a6a5088be5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.012969 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-config" (OuterVolumeSpecName: "config") pod "1e919317-cae2-432d-959f-8cf1d4520b56" (UID: "1e919317-cae2-432d-959f-8cf1d4520b56"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.016238 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1e919317-cae2-432d-959f-8cf1d4520b56" (UID: "1e919317-cae2-432d-959f-8cf1d4520b56"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.017524 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1e919317-cae2-432d-959f-8cf1d4520b56" (UID: "1e919317-cae2-432d-959f-8cf1d4520b56"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.026747 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1e919317-cae2-432d-959f-8cf1d4520b56" (UID: "1e919317-cae2-432d-959f-8cf1d4520b56"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.042190 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.042227 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.042239 4942 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/29a05f17-8ada-451f-8460-887a45caa4e6-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.042249 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.042259 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwckx\" (UniqueName: \"kubernetes.io/projected/79f6285d-991e-4118-8f5b-d451c225f1d6-kube-api-access-mwckx\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.042270 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e919317-cae2-432d-959f-8cf1d4520b56-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.042282 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.042292 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79f6285d-991e-4118-8f5b-d451c225f1d6-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.042301 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbrsd\" (UniqueName: \"kubernetes.io/projected/29a05f17-8ada-451f-8460-887a45caa4e6-kube-api-access-hbrsd\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.042310 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8vt6\" (UniqueName: \"kubernetes.io/projected/1e919317-cae2-432d-959f-8cf1d4520b56-kube-api-access-q8vt6\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.042320 4942 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/79f6285d-991e-4118-8f5b-d451c225f1d6-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.042329 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.042340 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d99k8\" (UniqueName: \"kubernetes.io/projected/a6c912f7-7ee8-4f53-a358-a6a6a5088be5-kube-api-access-d99k8\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.563864 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nnzck" event={"ID":"1e919317-cae2-432d-959f-8cf1d4520b56","Type":"ContainerDied","Data":"78b20f729f326e0f7c3c648fac44018c3d34b24ab3d2f709a7f976353f04998c"}
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.563919 4942 scope.go:117] "RemoveContainer" containerID="c929bc7a17036437784be59c9727e4ee675c038074de07e36b3deb35090e3ae7"
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.563931 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nnzck"
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.565525 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-p9l27"
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.566224 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-p9l27" event={"ID":"a6c912f7-7ee8-4f53-a358-a6a6a5088be5","Type":"ContainerDied","Data":"316d5107b8b347fd0cea3be7273208da7013d9d15ad9e9d0440db47bc1ed0d8e"}
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.566247 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="316d5107b8b347fd0cea3be7273208da7013d9d15ad9e9d0440db47bc1ed0d8e"
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.572378 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dcf8ff489-qc7h7" event={"ID":"79f6285d-991e-4118-8f5b-d451c225f1d6","Type":"ContainerDied","Data":"f7d111b50e472dcb7f51a51999f9e9be0fffcc2cd7c0ebb311c39dd7aa656b89"}
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.572429 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dcf8ff489-qc7h7"
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.575037 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d6d8bb5d5-5l49m"
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.575036 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d6d8bb5d5-5l49m" event={"ID":"29a05f17-8ada-451f-8460-887a45caa4e6","Type":"ContainerDied","Data":"f9f400d74dcc827f603d02436cc05b6b30e0d9e44bb3117a942b80c1685b87ee"}
Feb 18 19:35:20 crc kubenswrapper[4942]: E0218 19:35:20.575910 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-h2kjs" podUID="8aeac097-ba93-4859-a14f-839ae1421e28"
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.624432 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nnzck"]
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.634019 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nnzck"]
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.668122 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5dcf8ff489-qc7h7"]
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.686228 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5dcf8ff489-qc7h7"]
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.704886 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d6d8bb5d5-5l49m"]
Feb 18 19:35:20 crc kubenswrapper[4942]: I0218 19:35:20.713327 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7d6d8bb5d5-5l49m"]
Feb 18 19:35:21 crc kubenswrapper[4942]: E0218 19:35:21.028632 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
Feb 18 19:35:21 crc kubenswrapper[4942]: E0218 19:35:21.028967 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z75l6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-qvzh5_openstack(8db7f68b-a733-44fc-90b9-a1dd489fb42d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 18 19:35:21 crc kubenswrapper[4942]: E0218 19:35:21.031206 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-qvzh5" podUID="8db7f68b-a733-44fc-90b9-a1dd489fb42d"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.052385 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e919317-cae2-432d-959f-8cf1d4520b56" path="/var/lib/kubelet/pods/1e919317-cae2-432d-959f-8cf1d4520b56/volumes"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.053679 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29a05f17-8ada-451f-8460-887a45caa4e6" path="/var/lib/kubelet/pods/29a05f17-8ada-451f-8460-887a45caa4e6/volumes"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.057830 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79f6285d-991e-4118-8f5b-d451c225f1d6" path="/var/lib/kubelet/pods/79f6285d-991e-4118-8f5b-d451c225f1d6/volumes"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.058705 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4f4df56-7f3e-490d-9321-dc520b65369a" path="/var/lib/kubelet/pods/c4f4df56-7f3e-490d-9321-dc520b65369a/volumes"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.119275 4942 scope.go:117] "RemoveContainer" containerID="2d800ad31d40bf814e416ec398183ae11509cddedf514a96b60bf309617fbbde"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.216471 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-jhblh"]
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.293877 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-b4sf9"]
Feb 18 19:35:21 crc kubenswrapper[4942]: E0218 19:35:21.330152 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e919317-cae2-432d-959f-8cf1d4520b56" containerName="dnsmasq-dns"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.330245 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e919317-cae2-432d-959f-8cf1d4520b56" containerName="dnsmasq-dns"
Feb 18 19:35:21 crc kubenswrapper[4942]: E0218 19:35:21.330327 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c912f7-7ee8-4f53-a358-a6a6a5088be5" containerName="neutron-db-sync"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.330337 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c912f7-7ee8-4f53-a358-a6a6a5088be5" containerName="neutron-db-sync"
Feb 18 19:35:21 crc kubenswrapper[4942]: E0218 19:35:21.330361 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e919317-cae2-432d-959f-8cf1d4520b56" containerName="init"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.330369 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e919317-cae2-432d-959f-8cf1d4520b56" containerName="init"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.340156 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c912f7-7ee8-4f53-a358-a6a6a5088be5" containerName="neutron-db-sync"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.340199 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e919317-cae2-432d-959f-8cf1d4520b56" containerName="dnsmasq-dns"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.345219 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-b4sf9"]
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.345340 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.366077 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-67cc44d6c6-sp59w"]
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.369351 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67cc44d6c6-sp59w"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.371840 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.383836 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pc4kw"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.384094 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.390520 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.394642 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67cc44d6c6-sp59w"]
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.488460 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-config\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.489405 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-dns-svc\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.489516 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-combined-ca-bundle\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.489613 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-httpd-config\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.489844 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.489960 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.490025 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-config\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.490096 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-ovndb-tls-certs\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.490184 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gq76\" (UniqueName: \"kubernetes.io/projected/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-kube-api-access-7gq76\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.490264 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.490384 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znrs9\" (UniqueName: \"kubernetes.io/projected/df34bdbb-8771-4d46-b5ba-29088c793a4c-kube-api-access-znrs9\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w"
Feb 18 19:35:21 crc kubenswrapper[4942]: E0218 19:35:21.590111 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-qvzh5" podUID="8db7f68b-a733-44fc-90b9-a1dd489fb42d"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.591452 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.591505 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.591524 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-config\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.591543 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-ovndb-tls-certs\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.591585 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gq76\" (UniqueName: \"kubernetes.io/projected/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-kube-api-access-7gq76\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.591601 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.591644 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znrs9\" (UniqueName: \"kubernetes.io/projected/df34bdbb-8771-4d46-b5ba-29088c793a4c-kube-api-access-znrs9\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.591679 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-config\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.591708 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-dns-svc\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.591729 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-combined-ca-bundle\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.591749 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-httpd-config\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.592833 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.593600 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.593884 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.594218 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-dns-svc\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.597345 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-config\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.605482 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-ovndb-tls-certs\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.605632 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-config\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.608567 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-httpd-config\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.612496 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gq76\" (UniqueName: \"kubernetes.io/projected/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-kube-api-access-7gq76\") pod \"dnsmasq-dns-55f844cf75-b4sf9\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " pod="openstack/dnsmasq-dns-55f844cf75-b4sf9"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.612548 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znrs9\" (UniqueName: \"kubernetes.io/projected/df34bdbb-8771-4d46-b5ba-29088c793a4c-kube-api-access-znrs9\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.617549 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-combined-ca-bundle\") pod \"neutron-67cc44d6c6-sp59w\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") " pod="openstack/neutron-67cc44d6c6-sp59w"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.766540 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9"
Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.775122 4942 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/neutron-67cc44d6c6-sp59w" Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.775477 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-54d64cf59b-xp7rk"] Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.822405 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-jhblh"] Feb 18 19:35:21 crc kubenswrapper[4942]: I0218 19:35:21.943022 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b6b6597b8-m8ngr"] Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.001969 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tnqg7"] Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.041355 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.092724 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.348451 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-b4sf9"] Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.565005 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67cc44d6c6-sp59w"] Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.605640 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tnqg7" event={"ID":"f29ae8a1-b3cc-452c-ac99-b450ef3125d8","Type":"ContainerStarted","Data":"16fd17087ed9bd06ba590a2897d1853b93c4e9cb882e3c311955fd4cf453c84b"} Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.605702 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tnqg7" event={"ID":"f29ae8a1-b3cc-452c-ac99-b450ef3125d8","Type":"ContainerStarted","Data":"6ec2961c66c2e9651f7fa79f615b6ade4d1fc9deb7327e765b2f6fda45cc46c8"} Feb 18 19:35:22 crc 
kubenswrapper[4942]: I0218 19:35:22.615211 4942 generic.go:334] "Generic (PLEG): container finished" podID="47732c7e-8c0f-4244-bddb-98bf7b21d2db" containerID="b13ac4955f984728a414f0dd111c2e579b7dc9058268103695046c6e78fc7cfc" exitCode=0 Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.615307 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" event={"ID":"47732c7e-8c0f-4244-bddb-98bf7b21d2db","Type":"ContainerDied","Data":"b13ac4955f984728a414f0dd111c2e579b7dc9058268103695046c6e78fc7cfc"} Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.615335 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" event={"ID":"47732c7e-8c0f-4244-bddb-98bf7b21d2db","Type":"ContainerStarted","Data":"8337fe8032827581404d71567c1183946117f42260043a9aad5e272dceb8f9f6"} Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.620497 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cc44d6c6-sp59w" event={"ID":"df34bdbb-8771-4d46-b5ba-29088c793a4c","Type":"ContainerStarted","Data":"16cfdf5777da304074f8658c0e294de7985ac237e0c31312cdfc21ceef0ca88c"} Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.622424 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3","Type":"ContainerStarted","Data":"a08610a6a430e153a9003711c6d5df1b3e69d004820a8579935266408e2afede"} Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.625043 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b6b6597b8-m8ngr" event={"ID":"55d24776-2d1c-413a-8ba1-06cdadf63d04","Type":"ContainerStarted","Data":"8e07cc3497636460ed1799cf2428d3afb905f2937022e98e328e60ed8e665be5"} Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.626907 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e4517368-322e-4467-b31a-45b487e1035b","Type":"ContainerStarted","Data":"36f35a87fe58dff89b8aed800be1382b5a73805c6babc09fce366da3515f6407"} Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.634036 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-tnqg7" podStartSLOduration=10.634017397000001 podStartE2EDuration="10.634017397s" podCreationTimestamp="2026-02-18 19:35:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:22.630896476 +0000 UTC m=+1082.335829171" watchObservedRunningTime="2026-02-18 19:35:22.634017397 +0000 UTC m=+1082.338950062" Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.645347 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9ntpw" event={"ID":"af8e769c-00c3-41a1-97c4-d91902767dfe","Type":"ContainerStarted","Data":"a5a266a5f35f400b4926f114a4e397e8de76de3f56a176f14c64d1b553d123f4"} Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.652537 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54d64cf59b-xp7rk" event={"ID":"3ecc91e6-4e7f-438f-8530-bb8dd55764c5","Type":"ContainerStarted","Data":"f9c6502e1e5809e23b3664eb42d069f99f7705e9a66bf07935b4912b98778c64"} Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.666292 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-4h9n5" event={"ID":"983d5293-8413-4a29-88b2-ba775b3b4a8b","Type":"ContainerStarted","Data":"96103ab065d78416959c1d84cf5d96a95a67496c5bf29a0bff2dd2c96318a211"} Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.675325 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.682944 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" 
event={"ID":"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd","Type":"ContainerStarted","Data":"a28152676e5bbeaa52dbf0acfa190644662ce9fce2d0b5f7310504317b4faf82"} Feb 18 19:35:22 crc kubenswrapper[4942]: W0218 19:35:22.707138 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1019761a_2eb2_43f0_bce6_94e8b11a5c6a.slice/crio-a86dd504e5cfc46b03f20f6e448da41a9c6e744b02c0f0f6b9cfc4506ef33bc9 WatchSource:0}: Error finding container a86dd504e5cfc46b03f20f6e448da41a9c6e744b02c0f0f6b9cfc4506ef33bc9: Status 404 returned error can't find the container with id a86dd504e5cfc46b03f20f6e448da41a9c6e744b02c0f0f6b9cfc4506ef33bc9 Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.716700 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-9ntpw" podStartSLOduration=5.071064364 podStartE2EDuration="33.716679316s" podCreationTimestamp="2026-02-18 19:34:49 +0000 UTC" firstStartedPulling="2026-02-18 19:34:51.139008067 +0000 UTC m=+1050.843940732" lastFinishedPulling="2026-02-18 19:35:19.784623019 +0000 UTC m=+1079.489555684" observedRunningTime="2026-02-18 19:35:22.689220002 +0000 UTC m=+1082.394152667" watchObservedRunningTime="2026-02-18 19:35:22.716679316 +0000 UTC m=+1082.421611981" Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.732556 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-4h9n5" podStartSLOduration=8.852964198 podStartE2EDuration="1m5.732538248s" podCreationTimestamp="2026-02-18 19:34:17 +0000 UTC" firstStartedPulling="2026-02-18 19:34:24.184447176 +0000 UTC m=+1023.889379841" lastFinishedPulling="2026-02-18 19:35:21.064021226 +0000 UTC m=+1080.768953891" observedRunningTime="2026-02-18 19:35:22.705122715 +0000 UTC m=+1082.410055370" watchObservedRunningTime="2026-02-18 19:35:22.732538248 +0000 UTC m=+1082.437470913" Feb 18 19:35:22 crc kubenswrapper[4942]: I0218 19:35:22.858617 
4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-nnzck" podUID="1e919317-cae2-432d-959f-8cf1d4520b56" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.120:5353: i/o timeout" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.148203 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.347487 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8v6l\" (UniqueName: \"kubernetes.io/projected/47732c7e-8c0f-4244-bddb-98bf7b21d2db-kube-api-access-d8v6l\") pod \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.347553 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-config\") pod \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.347574 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-dns-swift-storage-0\") pod \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.347666 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-ovsdbserver-nb\") pod \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.347732 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-dns-svc\") pod \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.347749 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-ovsdbserver-sb\") pod \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\" (UID: \"47732c7e-8c0f-4244-bddb-98bf7b21d2db\") " Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.373966 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47732c7e-8c0f-4244-bddb-98bf7b21d2db-kube-api-access-d8v6l" (OuterVolumeSpecName: "kube-api-access-d8v6l") pod "47732c7e-8c0f-4244-bddb-98bf7b21d2db" (UID: "47732c7e-8c0f-4244-bddb-98bf7b21d2db"). InnerVolumeSpecName "kube-api-access-d8v6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.394311 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "47732c7e-8c0f-4244-bddb-98bf7b21d2db" (UID: "47732c7e-8c0f-4244-bddb-98bf7b21d2db"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.422683 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "47732c7e-8c0f-4244-bddb-98bf7b21d2db" (UID: "47732c7e-8c0f-4244-bddb-98bf7b21d2db"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.423274 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-config" (OuterVolumeSpecName: "config") pod "47732c7e-8c0f-4244-bddb-98bf7b21d2db" (UID: "47732c7e-8c0f-4244-bddb-98bf7b21d2db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.423529 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "47732c7e-8c0f-4244-bddb-98bf7b21d2db" (UID: "47732c7e-8c0f-4244-bddb-98bf7b21d2db"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.433291 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "47732c7e-8c0f-4244-bddb-98bf7b21d2db" (UID: "47732c7e-8c0f-4244-bddb-98bf7b21d2db"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.452491 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.452533 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.452549 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8v6l\" (UniqueName: \"kubernetes.io/projected/47732c7e-8c0f-4244-bddb-98bf7b21d2db-kube-api-access-d8v6l\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.452561 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.452573 4942 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.452585 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47732c7e-8c0f-4244-bddb-98bf7b21d2db-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.704854 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b6b6597b8-m8ngr" event={"ID":"55d24776-2d1c-413a-8ba1-06cdadf63d04","Type":"ContainerStarted","Data":"fdd3811b77cebb81cb4d835bd7bb9549dffa32c8c00fba3295d69f115674b90e"} Feb 18 19:35:23 crc 
kubenswrapper[4942]: I0218 19:35:23.707363 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b6b6597b8-m8ngr" event={"ID":"55d24776-2d1c-413a-8ba1-06cdadf63d04","Type":"ContainerStarted","Data":"0f654f8decc1fb809c31df37ac391bf8043913d039ad32343d90b0ea671290c4"} Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.713314 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54d64cf59b-xp7rk" event={"ID":"3ecc91e6-4e7f-438f-8530-bb8dd55764c5","Type":"ContainerStarted","Data":"4bd98068ec637cd03846de3ac7d0bc145a81ebf089811ebc4b9501aa76cae874"} Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.713357 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54d64cf59b-xp7rk" event={"ID":"3ecc91e6-4e7f-438f-8530-bb8dd55764c5","Type":"ContainerStarted","Data":"036dc92b12e420ef80458fb3e23d3375424a9aed1ed6d80a904da58e73ba2659"} Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.737422 4942 generic.go:334] "Generic (PLEG): container finished" podID="3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" containerID="9bb47534d9e06becc5f445ae59185cbfce5bbc93ac6da1f08bbfa8a94ab2efbe" exitCode=0 Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.737509 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" event={"ID":"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd","Type":"ContainerDied","Data":"9bb47534d9e06becc5f445ae59185cbfce5bbc93ac6da1f08bbfa8a94ab2efbe"} Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.740990 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" event={"ID":"47732c7e-8c0f-4244-bddb-98bf7b21d2db","Type":"ContainerDied","Data":"8337fe8032827581404d71567c1183946117f42260043a9aad5e272dceb8f9f6"} Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.741060 4942 scope.go:117] "RemoveContainer" containerID="b13ac4955f984728a414f0dd111c2e579b7dc9058268103695046c6e78fc7cfc" Feb 18 19:35:23 crc 
kubenswrapper[4942]: I0218 19:35:23.741228 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-jhblh" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.750137 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7b6b6597b8-m8ngr" podStartSLOduration=23.794370455 podStartE2EDuration="24.750114549s" podCreationTimestamp="2026-02-18 19:34:59 +0000 UTC" firstStartedPulling="2026-02-18 19:35:21.985948271 +0000 UTC m=+1081.690880936" lastFinishedPulling="2026-02-18 19:35:22.941692355 +0000 UTC m=+1082.646625030" observedRunningTime="2026-02-18 19:35:23.729604046 +0000 UTC m=+1083.434536711" watchObservedRunningTime="2026-02-18 19:35:23.750114549 +0000 UTC m=+1083.455047214" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.771662 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1019761a-2eb2-43f0-bce6-94e8b11a5c6a","Type":"ContainerStarted","Data":"9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7"} Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.771703 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1019761a-2eb2-43f0-bce6-94e8b11a5c6a","Type":"ContainerStarted","Data":"a86dd504e5cfc46b03f20f6e448da41a9c6e744b02c0f0f6b9cfc4506ef33bc9"} Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.811045 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-54d64cf59b-xp7rk" podStartSLOduration=23.777483076 podStartE2EDuration="24.811019023s" podCreationTimestamp="2026-02-18 19:34:59 +0000 UTC" firstStartedPulling="2026-02-18 19:35:21.812360148 +0000 UTC m=+1081.517292813" lastFinishedPulling="2026-02-18 19:35:22.845896105 +0000 UTC m=+1082.550828760" observedRunningTime="2026-02-18 19:35:23.767859091 +0000 UTC m=+1083.472791756" 
watchObservedRunningTime="2026-02-18 19:35:23.811019023 +0000 UTC m=+1083.515951678" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.811395 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cc44d6c6-sp59w" event={"ID":"df34bdbb-8771-4d46-b5ba-29088c793a4c","Type":"ContainerStarted","Data":"686f47180a9ccf7623cbed7358eef7f2d2fa27a8a72e96ad726f79f619dd1afc"} Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.811436 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cc44d6c6-sp59w" event={"ID":"df34bdbb-8771-4d46-b5ba-29088c793a4c","Type":"ContainerStarted","Data":"8b2790adbab8c3f7f1e931b6f90eb17d0d170a8ea3e8297671b08ac8cd2f42be"} Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.811477 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-67cc44d6c6-sp59w" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.815003 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3","Type":"ContainerStarted","Data":"286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e"} Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.815037 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6b8c9f8ffc-qtdr8"] Feb 18 19:35:23 crc kubenswrapper[4942]: E0218 19:35:23.815727 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47732c7e-8c0f-4244-bddb-98bf7b21d2db" containerName="init" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.815750 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="47732c7e-8c0f-4244-bddb-98bf7b21d2db" containerName="init" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.815922 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="47732c7e-8c0f-4244-bddb-98bf7b21d2db" containerName="init" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.827875 4942 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.828356 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6b8c9f8ffc-qtdr8"] Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.831477 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.831916 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.875246 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-internal-tls-certs\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.875304 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-httpd-config\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.875500 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-public-tls-certs\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.875899 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hnxj\" (UniqueName: 
\"kubernetes.io/projected/921d1a28-ead8-42a6-933c-38a339741884-kube-api-access-4hnxj\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.884166 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-ovndb-tls-certs\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.884780 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-combined-ca-bundle\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.886139 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-config\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.908852 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-jhblh"] Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.925982 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-jhblh"] Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.946894 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-67cc44d6c6-sp59w" podStartSLOduration=2.946874223 podStartE2EDuration="2.946874223s" 
podCreationTimestamp="2026-02-18 19:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:23.91634484 +0000 UTC m=+1083.621277495" watchObservedRunningTime="2026-02-18 19:35:23.946874223 +0000 UTC m=+1083.651806888"
Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.987714 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hnxj\" (UniqueName: \"kubernetes.io/projected/921d1a28-ead8-42a6-933c-38a339741884-kube-api-access-4hnxj\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8"
Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.987785 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-ovndb-tls-certs\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8"
Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.987806 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-combined-ca-bundle\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8"
Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.987877 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-config\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8"
Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.987915 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-internal-tls-certs\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8"
Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.987936 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-httpd-config\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8"
Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.987964 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-public-tls-certs\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8"
Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.996578 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-httpd-config\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8"
Feb 18 19:35:23 crc kubenswrapper[4942]: I0218 19:35:23.997140 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-public-tls-certs\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8"
Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.000595 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-internal-tls-certs\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8"
Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.001423 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-ovndb-tls-certs\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8"
Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.006923 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-combined-ca-bundle\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8"
Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.008608 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-config\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8"
Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.018473 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hnxj\" (UniqueName: \"kubernetes.io/projected/921d1a28-ead8-42a6-933c-38a339741884-kube-api-access-4hnxj\") pod \"neutron-6b8c9f8ffc-qtdr8\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " pod="openstack/neutron-6b8c9f8ffc-qtdr8"
Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.196832 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6b8c9f8ffc-qtdr8"
Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.841561 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1019761a-2eb2-43f0-bce6-94e8b11a5c6a","Type":"ContainerStarted","Data":"c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698"}
Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.841981 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1019761a-2eb2-43f0-bce6-94e8b11a5c6a" containerName="glance-log" containerID="cri-o://9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7" gracePeriod=30
Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.842410 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1019761a-2eb2-43f0-bce6-94e8b11a5c6a" containerName="glance-httpd" containerID="cri-o://c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698" gracePeriod=30
Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.851247 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3","Type":"ContainerStarted","Data":"608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530"}
Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.851404 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" containerName="glance-log" containerID="cri-o://286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e" gracePeriod=30
Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.851521 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" containerName="glance-httpd" containerID="cri-o://608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530" gracePeriod=30
Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.859904 4942 generic.go:334] "Generic (PLEG): container finished" podID="af8e769c-00c3-41a1-97c4-d91902767dfe" containerID="a5a266a5f35f400b4926f114a4e397e8de76de3f56a176f14c64d1b553d123f4" exitCode=0
Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.860294 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9ntpw" event={"ID":"af8e769c-00c3-41a1-97c4-d91902767dfe","Type":"ContainerDied","Data":"a5a266a5f35f400b4926f114a4e397e8de76de3f56a176f14c64d1b553d123f4"}
Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.882974 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" event={"ID":"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd","Type":"ContainerStarted","Data":"07ed859237f582f1701b07e571f92a114e6576149d9ab982ddb17cd24aca3587"}
Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.883918 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9"
Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.948546 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=21.948524871 podStartE2EDuration="21.948524871s" podCreationTimestamp="2026-02-18 19:35:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:24.902367501 +0000 UTC m=+1084.607300196" watchObservedRunningTime="2026-02-18 19:35:24.948524871 +0000 UTC m=+1084.653457536"
Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.951094 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=21.951083047 podStartE2EDuration="21.951083047s" podCreationTimestamp="2026-02-18 19:35:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:24.86693196 +0000 UTC m=+1084.571864635" watchObservedRunningTime="2026-02-18 19:35:24.951083047 +0000 UTC m=+1084.656015712"
Feb 18 19:35:24 crc kubenswrapper[4942]: I0218 19:35:24.963365 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" podStartSLOduration=3.963345026 podStartE2EDuration="3.963345026s" podCreationTimestamp="2026-02-18 19:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:24.939341572 +0000 UTC m=+1084.644274247" watchObservedRunningTime="2026-02-18 19:35:24.963345026 +0000 UTC m=+1084.668277691"
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.051117 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47732c7e-8c0f-4244-bddb-98bf7b21d2db" path="/var/lib/kubelet/pods/47732c7e-8c0f-4244-bddb-98bf7b21d2db/volumes"
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.395207 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6b8c9f8ffc-qtdr8"]
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.618035 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.748084 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-config-data\") pod \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") "
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.748308 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-httpd-run\") pod \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") "
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.748381 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-combined-ca-bundle\") pod \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") "
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.748412 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-logs\") pod \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") "
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.748436 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n92mx\" (UniqueName: \"kubernetes.io/projected/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-kube-api-access-n92mx\") pod \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") "
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.748481 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-scripts\") pod \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") "
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.748498 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") "
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.748644 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1019761a-2eb2-43f0-bce6-94e8b11a5c6a" (UID: "1019761a-2eb2-43f0-bce6-94e8b11a5c6a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.749069 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-logs" (OuterVolumeSpecName: "logs") pod "1019761a-2eb2-43f0-bce6-94e8b11a5c6a" (UID: "1019761a-2eb2-43f0-bce6-94e8b11a5c6a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.749092 4942 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.753435 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-kube-api-access-n92mx" (OuterVolumeSpecName: "kube-api-access-n92mx") pod "1019761a-2eb2-43f0-bce6-94e8b11a5c6a" (UID: "1019761a-2eb2-43f0-bce6-94e8b11a5c6a"). InnerVolumeSpecName "kube-api-access-n92mx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.761045 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.763323 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-scripts" (OuterVolumeSpecName: "scripts") pod "1019761a-2eb2-43f0-bce6-94e8b11a5c6a" (UID: "1019761a-2eb2-43f0-bce6-94e8b11a5c6a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.763858 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "1019761a-2eb2-43f0-bce6-94e8b11a5c6a" (UID: "1019761a-2eb2-43f0-bce6-94e8b11a5c6a"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.781135 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1019761a-2eb2-43f0-bce6-94e8b11a5c6a" (UID: "1019761a-2eb2-43f0-bce6-94e8b11a5c6a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.849955 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-logs\") pod \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") "
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.849987 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") "
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.849983 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-config-data" (OuterVolumeSpecName: "config-data") pod "1019761a-2eb2-43f0-bce6-94e8b11a5c6a" (UID: "1019761a-2eb2-43f0-bce6-94e8b11a5c6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.850029 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-scripts\") pod \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") "
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.850066 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6mcd\" (UniqueName: \"kubernetes.io/projected/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-kube-api-access-r6mcd\") pod \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") "
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.850087 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-httpd-run\") pod \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") "
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.850108 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-config-data\") pod \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") "
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.850140 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-combined-ca-bundle\") pod \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\" (UID: \"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3\") "
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.850163 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-config-data\") pod \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\" (UID: \"1019761a-2eb2-43f0-bce6-94e8b11a5c6a\") "
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.850446 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.850468 4942 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.850479 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.850524 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-logs\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.850532 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n92mx\" (UniqueName: \"kubernetes.io/projected/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-kube-api-access-n92mx\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.850612 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-logs" (OuterVolumeSpecName: "logs") pod "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" (UID: "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.850953 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" (UID: "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:35:25 crc kubenswrapper[4942]: W0218 19:35:25.853100 4942 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/1019761a-2eb2-43f0-bce6-94e8b11a5c6a/volumes/kubernetes.io~secret/config-data
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.853121 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-config-data" (OuterVolumeSpecName: "config-data") pod "1019761a-2eb2-43f0-bce6-94e8b11a5c6a" (UID: "1019761a-2eb2-43f0-bce6-94e8b11a5c6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.873326 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" (UID: "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.875365 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-scripts" (OuterVolumeSpecName: "scripts") pod "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" (UID: "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.895810 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8c9f8ffc-qtdr8" event={"ID":"921d1a28-ead8-42a6-933c-38a339741884","Type":"ContainerStarted","Data":"5406c6b90781279268f75608c064a21d3a65e4eb4c8a4c7e959d4465b49185b9"}
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.895851 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8c9f8ffc-qtdr8" event={"ID":"921d1a28-ead8-42a6-933c-38a339741884","Type":"ContainerStarted","Data":"66f57c246570cb64775a601036f5870a5885605c57cb8be2088eae510c596f8b"}
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.897468 4942 generic.go:334] "Generic (PLEG): container finished" podID="e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" containerID="608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530" exitCode=0
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.897490 4942 generic.go:334] "Generic (PLEG): container finished" podID="e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" containerID="286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e" exitCode=143
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.897522 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3","Type":"ContainerDied","Data":"608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530"}
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.897540 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3","Type":"ContainerDied","Data":"286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e"}
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.897550 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3","Type":"ContainerDied","Data":"a08610a6a430e153a9003711c6d5df1b3e69d004820a8579935266408e2afede"}
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.897565 4942 scope.go:117] "RemoveContainer" containerID="608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530"
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.897743 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.905724 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-kube-api-access-r6mcd" (OuterVolumeSpecName: "kube-api-access-r6mcd") pod "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" (UID: "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3"). InnerVolumeSpecName "kube-api-access-r6mcd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.906215 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4517368-322e-4467-b31a-45b487e1035b","Type":"ContainerStarted","Data":"e4a549323fce47497ee0c4cfa6ce99131c2b1fa4f1a33956d55a73512533ebbd"}
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.906927 4942 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.908056 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" (UID: "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.911789 4942 generic.go:334] "Generic (PLEG): container finished" podID="1019761a-2eb2-43f0-bce6-94e8b11a5c6a" containerID="c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698" exitCode=143
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.911817 4942 generic.go:334] "Generic (PLEG): container finished" podID="1019761a-2eb2-43f0-bce6-94e8b11a5c6a" containerID="9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7" exitCode=143
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.912094 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.912080 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1019761a-2eb2-43f0-bce6-94e8b11a5c6a","Type":"ContainerDied","Data":"c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698"}
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.912382 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1019761a-2eb2-43f0-bce6-94e8b11a5c6a","Type":"ContainerDied","Data":"9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7"}
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.912404 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1019761a-2eb2-43f0-bce6-94e8b11a5c6a","Type":"ContainerDied","Data":"a86dd504e5cfc46b03f20f6e448da41a9c6e744b02c0f0f6b9cfc4506ef33bc9"}
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.939244 4942 scope.go:117] "RemoveContainer" containerID="286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e"
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.951753 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-logs\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.951807 4942 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.951816 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.951825 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6mcd\" (UniqueName: \"kubernetes.io/projected/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-kube-api-access-r6mcd\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.951836 4942 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.951844 4942 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.951853 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.951861 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1019761a-2eb2-43f0-bce6-94e8b11a5c6a-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.956031 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.984160 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-config-data" (OuterVolumeSpecName: "config-data") pod "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" (UID: "e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:25 crc kubenswrapper[4942]: I0218 19:35:25.993180 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.007119 4942 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.017077 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 18 19:35:26 crc kubenswrapper[4942]: E0218 19:35:26.017615 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1019761a-2eb2-43f0-bce6-94e8b11a5c6a" containerName="glance-log"
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.017632 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="1019761a-2eb2-43f0-bce6-94e8b11a5c6a" containerName="glance-log"
Feb 18 19:35:26 crc kubenswrapper[4942]: E0218 19:35:26.017650 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" containerName="glance-httpd"
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.017657 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" containerName="glance-httpd"
Feb 18 19:35:26 crc kubenswrapper[4942]: E0218 19:35:26.017695 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1019761a-2eb2-43f0-bce6-94e8b11a5c6a" containerName="glance-httpd"
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.017703 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="1019761a-2eb2-43f0-bce6-94e8b11a5c6a" containerName="glance-httpd"
Feb 18 19:35:26 crc kubenswrapper[4942]: E0218 19:35:26.017718 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" containerName="glance-log"
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.017723 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" containerName="glance-log"
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.017918 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" containerName="glance-log"
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.017935 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="1019761a-2eb2-43f0-bce6-94e8b11a5c6a" containerName="glance-httpd"
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.017944 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" containerName="glance-httpd"
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.017986 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="1019761a-2eb2-43f0-bce6-94e8b11a5c6a" containerName="glance-log"
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.019112 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.020135 4942 scope.go:117] "RemoveContainer" containerID="608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530"
Feb 18 19:35:26 crc kubenswrapper[4942]: E0218 19:35:26.020545 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530\": container with ID starting with 608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530 not found: ID does not exist" containerID="608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530"
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.020573 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530"} err="failed to get container status \"608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530\": rpc error: code = NotFound desc = could not find container \"608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530\": container with ID starting with 608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530 not found: ID does not exist"
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.020594 4942 scope.go:117] "RemoveContainer" containerID="286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e"
Feb 18 19:35:26 crc kubenswrapper[4942]: E0218 19:35:26.020772 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e\": container with ID starting with 286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e not found: ID does not exist" containerID="286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e"
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.020792 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e"} err="failed to get container status \"286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e\": rpc error: code = NotFound desc = could not find container \"286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e\": container with ID starting with 286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e not found: ID does not exist"
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.020806 4942 scope.go:117] "RemoveContainer" containerID="608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530"
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.021019 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530"} err="failed to get container status \"608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530\": rpc error: code = NotFound desc = could not find container \"608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530\": container with ID starting with 608226f93f6b011f29d114f5bb7d2061e4add8384a0ff0be89f7c6d996ce4530 not found: ID does not exist"
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.021056 4942 scope.go:117] "RemoveContainer" containerID="286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e"
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.021300 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e"} err="failed to get container status \"286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e\": rpc error: code = NotFound desc = could not find container \"286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e\": container with ID starting with 286c5e645cca1f276431a49063d017c31331b57ff3a3adef65ab9aa752117a6e not found: ID does not exist"
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.021314 4942 scope.go:117] "RemoveContainer" containerID="c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698"
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.021777 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.021958 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.028375 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.053062 4942 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.053101 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.106175 4942 scope.go:117] "RemoveContainer" containerID="9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7"
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.155571 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-scripts\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.155820 4942 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.155958 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.155992 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.156059 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.156109 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk7vw\" (UniqueName: \"kubernetes.io/projected/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-kube-api-access-rk7vw\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 
19:35:26.156150 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-config-data\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.156240 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-logs\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.178008 4942 scope.go:117] "RemoveContainer" containerID="c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698" Feb 18 19:35:26 crc kubenswrapper[4942]: E0218 19:35:26.178430 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698\": container with ID starting with c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698 not found: ID does not exist" containerID="c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.178465 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698"} err="failed to get container status \"c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698\": rpc error: code = NotFound desc = could not find container \"c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698\": container with ID starting with c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698 not found: ID does not exist" Feb 18 19:35:26 
crc kubenswrapper[4942]: I0218 19:35:26.178491 4942 scope.go:117] "RemoveContainer" containerID="9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7" Feb 18 19:35:26 crc kubenswrapper[4942]: E0218 19:35:26.180443 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7\": container with ID starting with 9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7 not found: ID does not exist" containerID="9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.180492 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7"} err="failed to get container status \"9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7\": rpc error: code = NotFound desc = could not find container \"9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7\": container with ID starting with 9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7 not found: ID does not exist" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.180526 4942 scope.go:117] "RemoveContainer" containerID="c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.180891 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698"} err="failed to get container status \"c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698\": rpc error: code = NotFound desc = could not find container \"c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698\": container with ID starting with c3ea01aa2e28d7af52196a675dad3daf6cebda76c104454a2dbd5773bb572698 not found: ID does not exist" Feb 18 
19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.180916 4942 scope.go:117] "RemoveContainer" containerID="9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.183073 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7"} err="failed to get container status \"9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7\": rpc error: code = NotFound desc = could not find container \"9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7\": container with ID starting with 9320749dc03b7598fafa195353940eae613dd86b6a5b319bcd099b683cd1ffb7 not found: ID does not exist" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.257625 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-logs\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.257682 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-scripts\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.257740 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.257831 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.257849 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.257874 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.257893 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk7vw\" (UniqueName: \"kubernetes.io/projected/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-kube-api-access-rk7vw\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.257916 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-config-data\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.260500 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-logs\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.261466 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.262744 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-config-data\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.265122 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.266275 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.267523 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.273338 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.283661 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk7vw\" (UniqueName: \"kubernetes.io/projected/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-kube-api-access-rk7vw\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.322985 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.348240 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.417000 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.448586 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.461611 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.463634 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.467150 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.467385 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.468827 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.505248 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-9ntpw" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.567116 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.567184 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.572091 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5cd0efdc-b208-4270-9c23-33e01f7298be-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.572162 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cd0efdc-b208-4270-9c23-33e01f7298be-logs\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.572211 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.572231 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bv8n\" (UniqueName: \"kubernetes.io/projected/5cd0efdc-b208-4270-9c23-33e01f7298be-kube-api-access-8bv8n\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.572334 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.572501 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.673366 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-combined-ca-bundle\") pod \"af8e769c-00c3-41a1-97c4-d91902767dfe\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.673418 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af8e769c-00c3-41a1-97c4-d91902767dfe-logs\") pod \"af8e769c-00c3-41a1-97c4-d91902767dfe\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " Feb 18 
19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.673575 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-config-data\") pod \"af8e769c-00c3-41a1-97c4-d91902767dfe\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.674204 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af8e769c-00c3-41a1-97c4-d91902767dfe-logs" (OuterVolumeSpecName: "logs") pod "af8e769c-00c3-41a1-97c4-d91902767dfe" (UID: "af8e769c-00c3-41a1-97c4-d91902767dfe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.685895 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-scripts\") pod \"af8e769c-00c3-41a1-97c4-d91902767dfe\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.686135 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dmdr\" (UniqueName: \"kubernetes.io/projected/af8e769c-00c3-41a1-97c4-d91902767dfe-kube-api-access-9dmdr\") pod \"af8e769c-00c3-41a1-97c4-d91902767dfe\" (UID: \"af8e769c-00c3-41a1-97c4-d91902767dfe\") " Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.686503 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.686563 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.686624 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.686708 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5cd0efdc-b208-4270-9c23-33e01f7298be-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.686745 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cd0efdc-b208-4270-9c23-33e01f7298be-logs\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.686801 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.686823 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bv8n\" (UniqueName: 
\"kubernetes.io/projected/5cd0efdc-b208-4270-9c23-33e01f7298be-kube-api-access-8bv8n\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.686940 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.687097 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af8e769c-00c3-41a1-97c4-d91902767dfe-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.690389 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.694577 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cd0efdc-b208-4270-9c23-33e01f7298be-logs\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.694826 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5cd0efdc-b208-4270-9c23-33e01f7298be-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 
19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.698540 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.705966 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af8e769c-00c3-41a1-97c4-d91902767dfe-kube-api-access-9dmdr" (OuterVolumeSpecName: "kube-api-access-9dmdr") pod "af8e769c-00c3-41a1-97c4-d91902767dfe" (UID: "af8e769c-00c3-41a1-97c4-d91902767dfe"). InnerVolumeSpecName "kube-api-access-9dmdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.705975 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-scripts" (OuterVolumeSpecName: "scripts") pod "af8e769c-00c3-41a1-97c4-d91902767dfe" (UID: "af8e769c-00c3-41a1-97c4-d91902767dfe"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.729406 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bv8n\" (UniqueName: \"kubernetes.io/projected/5cd0efdc-b208-4270-9c23-33e01f7298be-kube-api-access-8bv8n\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.729503 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.731131 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.737750 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.753887 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-config-data" (OuterVolumeSpecName: "config-data") pod "af8e769c-00c3-41a1-97c4-d91902767dfe" (UID: "af8e769c-00c3-41a1-97c4-d91902767dfe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.754153 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af8e769c-00c3-41a1-97c4-d91902767dfe" (UID: "af8e769c-00c3-41a1-97c4-d91902767dfe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.784883 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.790510 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dmdr\" (UniqueName: \"kubernetes.io/projected/af8e769c-00c3-41a1-97c4-d91902767dfe-kube-api-access-9dmdr\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.790546 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.790557 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.790568 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af8e769c-00c3-41a1-97c4-d91902767dfe-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.812733 4942 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.926366 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9ntpw" event={"ID":"af8e769c-00c3-41a1-97c4-d91902767dfe","Type":"ContainerDied","Data":"eb7a8e3a23f3477cac51aacb10a95d5378f6772c63aae9e96752efd516b0a2a1"} Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.926415 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb7a8e3a23f3477cac51aacb10a95d5378f6772c63aae9e96752efd516b0a2a1" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.926492 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9ntpw" Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.946244 4942 generic.go:334] "Generic (PLEG): container finished" podID="983d5293-8413-4a29-88b2-ba775b3b4a8b" containerID="96103ab065d78416959c1d84cf5d96a95a67496c5bf29a0bff2dd2c96318a211" exitCode=0 Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.946310 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-4h9n5" event={"ID":"983d5293-8413-4a29-88b2-ba775b3b4a8b","Type":"ContainerDied","Data":"96103ab065d78416959c1d84cf5d96a95a67496c5bf29a0bff2dd2c96318a211"} Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.989215 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8c9f8ffc-qtdr8" event={"ID":"921d1a28-ead8-42a6-933c-38a339741884","Type":"ContainerStarted","Data":"531ee7816fd7353cd71c0f54232b96ad0dd37eddd3c96b8ac1f0e58197be9795"} Feb 18 19:35:26 crc kubenswrapper[4942]: I0218 19:35:26.991203 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.010478 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-external-api-0"] Feb 18 19:35:27 crc kubenswrapper[4942]: W0218 19:35:27.023653 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc47abc8_8f2f_41c6_96c3_d6e81388e5b2.slice/crio-5071cc9380a8d29894cf185feb69d5860ec44c77140f4a82c7520791aad9109c WatchSource:0}: Error finding container 5071cc9380a8d29894cf185feb69d5860ec44c77140f4a82c7520791aad9109c: Status 404 returned error can't find the container with id 5071cc9380a8d29894cf185feb69d5860ec44c77140f4a82c7520791aad9109c Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.069637 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1019761a-2eb2-43f0-bce6-94e8b11a5c6a" path="/var/lib/kubelet/pods/1019761a-2eb2-43f0-bce6-94e8b11a5c6a/volumes" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.070614 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3" path="/var/lib/kubelet/pods/e6ae6fb3-34ed-41bd-beb7-7ecedb83a6e3/volumes" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.087191 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6b8c9f8ffc-qtdr8" podStartSLOduration=4.087173574 podStartE2EDuration="4.087173574s" podCreationTimestamp="2026-02-18 19:35:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:27.020970333 +0000 UTC m=+1086.725903008" watchObservedRunningTime="2026-02-18 19:35:27.087173574 +0000 UTC m=+1086.792106239" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.117940 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5794bf846d-82xzg"] Feb 18 19:35:27 crc kubenswrapper[4942]: E0218 19:35:27.118394 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af8e769c-00c3-41a1-97c4-d91902767dfe" 
containerName="placement-db-sync" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.118415 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8e769c-00c3-41a1-97c4-d91902767dfe" containerName="placement-db-sync" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.118617 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="af8e769c-00c3-41a1-97c4-d91902767dfe" containerName="placement-db-sync" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.119532 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.125986 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.126149 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.131992 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.132367 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-z4q86" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.132521 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.132659 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5794bf846d-82xzg"] Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.201717 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab301488-e86d-4ba2-b628-f4ea689acd3b-logs\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " 
pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.201793 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-scripts\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.201823 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-combined-ca-bundle\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.201912 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-config-data\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.201955 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-public-tls-certs\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.201987 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v89br\" (UniqueName: \"kubernetes.io/projected/ab301488-e86d-4ba2-b628-f4ea689acd3b-kube-api-access-v89br\") pod \"placement-5794bf846d-82xzg\" (UID: 
\"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.202032 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-internal-tls-certs\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.305468 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab301488-e86d-4ba2-b628-f4ea689acd3b-logs\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.305851 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-scripts\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.306019 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-combined-ca-bundle\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.306129 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-config-data\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 
18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.306181 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-public-tls-certs\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.306222 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v89br\" (UniqueName: \"kubernetes.io/projected/ab301488-e86d-4ba2-b628-f4ea689acd3b-kube-api-access-v89br\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.306254 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-internal-tls-certs\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.308286 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab301488-e86d-4ba2-b628-f4ea689acd3b-logs\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.312865 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-public-tls-certs\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.313142 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-scripts\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.313634 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-config-data\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.314359 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-combined-ca-bundle\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.314424 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-internal-tls-certs\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.326930 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v89br\" (UniqueName: \"kubernetes.io/projected/ab301488-e86d-4ba2-b628-f4ea689acd3b-kube-api-access-v89br\") pod \"placement-5794bf846d-82xzg\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.463503 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:27 crc kubenswrapper[4942]: I0218 19:35:27.503122 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:35:27 crc kubenswrapper[4942]: W0218 19:35:27.528869 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cd0efdc_b208_4270_9c23_33e01f7298be.slice/crio-a6a851f31a8af36c76a03d082cd2bcde730a917e0fda0acf37bf24b1cd98ff69 WatchSource:0}: Error finding container a6a851f31a8af36c76a03d082cd2bcde730a917e0fda0acf37bf24b1cd98ff69: Status 404 returned error can't find the container with id a6a851f31a8af36c76a03d082cd2bcde730a917e0fda0acf37bf24b1cd98ff69 Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.052205 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2","Type":"ContainerStarted","Data":"af0f17fdd4b111e87d9ffc74c4fed5912320cf203228fa25c7dde7a00ca05bb2"} Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.052693 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2","Type":"ContainerStarted","Data":"5071cc9380a8d29894cf185feb69d5860ec44c77140f4a82c7520791aad9109c"} Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.052708 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5794bf846d-82xzg"] Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.061702 4942 generic.go:334] "Generic (PLEG): container finished" podID="f29ae8a1-b3cc-452c-ac99-b450ef3125d8" containerID="16fd17087ed9bd06ba590a2897d1853b93c4e9cb882e3c311955fd4cf453c84b" exitCode=0 Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.061828 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tnqg7" 
event={"ID":"f29ae8a1-b3cc-452c-ac99-b450ef3125d8","Type":"ContainerDied","Data":"16fd17087ed9bd06ba590a2897d1853b93c4e9cb882e3c311955fd4cf453c84b"} Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.063561 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5cd0efdc-b208-4270-9c23-33e01f7298be","Type":"ContainerStarted","Data":"a6a851f31a8af36c76a03d082cd2bcde730a917e0fda0acf37bf24b1cd98ff69"} Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.716027 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.847664 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfbhf\" (UniqueName: \"kubernetes.io/projected/983d5293-8413-4a29-88b2-ba775b3b4a8b-kube-api-access-mfbhf\") pod \"983d5293-8413-4a29-88b2-ba775b3b4a8b\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.847847 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-config-data\") pod \"983d5293-8413-4a29-88b2-ba775b3b4a8b\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.847935 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-db-sync-config-data\") pod \"983d5293-8413-4a29-88b2-ba775b3b4a8b\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.847968 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-combined-ca-bundle\") pod 
\"983d5293-8413-4a29-88b2-ba775b3b4a8b\" (UID: \"983d5293-8413-4a29-88b2-ba775b3b4a8b\") " Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.852282 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "983d5293-8413-4a29-88b2-ba775b3b4a8b" (UID: "983d5293-8413-4a29-88b2-ba775b3b4a8b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.860003 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/983d5293-8413-4a29-88b2-ba775b3b4a8b-kube-api-access-mfbhf" (OuterVolumeSpecName: "kube-api-access-mfbhf") pod "983d5293-8413-4a29-88b2-ba775b3b4a8b" (UID: "983d5293-8413-4a29-88b2-ba775b3b4a8b"). InnerVolumeSpecName "kube-api-access-mfbhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.902918 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "983d5293-8413-4a29-88b2-ba775b3b4a8b" (UID: "983d5293-8413-4a29-88b2-ba775b3b4a8b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.923298 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-config-data" (OuterVolumeSpecName: "config-data") pod "983d5293-8413-4a29-88b2-ba775b3b4a8b" (UID: "983d5293-8413-4a29-88b2-ba775b3b4a8b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.949777 4942 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.949806 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.949815 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfbhf\" (UniqueName: \"kubernetes.io/projected/983d5293-8413-4a29-88b2-ba775b3b4a8b-kube-api-access-mfbhf\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:28 crc kubenswrapper[4942]: I0218 19:35:28.949824 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/983d5293-8413-4a29-88b2-ba775b3b4a8b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.107790 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2","Type":"ContainerStarted","Data":"0c82f89cf5ce35ccda5a5b29f76963df047d6ffca2e6b1f0144d5f20d3dfe0a7"} Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.110350 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-4h9n5" event={"ID":"983d5293-8413-4a29-88b2-ba775b3b4a8b","Type":"ContainerDied","Data":"4d89390c95728bcf123b54a9e3391d1834069387fcdf07d8c1f1a0845cb094b5"} Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.110378 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d89390c95728bcf123b54a9e3391d1834069387fcdf07d8c1f1a0845cb094b5" Feb 18 19:35:29 
crc kubenswrapper[4942]: I0218 19:35:29.110438 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-4h9n5" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.134814 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.13479775 podStartE2EDuration="4.13479775s" podCreationTimestamp="2026-02-18 19:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:29.118124057 +0000 UTC m=+1088.823056722" watchObservedRunningTime="2026-02-18 19:35:29.13479775 +0000 UTC m=+1088.839730415" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.139657 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5cd0efdc-b208-4270-9c23-33e01f7298be","Type":"ContainerStarted","Data":"1b92a562ea433f43d820eeece6e874b38a343cedbb1b276827ec28ad7679c4ae"} Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.163663 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5794bf846d-82xzg" event={"ID":"ab301488-e86d-4ba2-b628-f4ea689acd3b","Type":"ContainerStarted","Data":"9ef44ea2e648e2bbfb3bd289c97d6ea2ed93750446192377e2017b04b006f489"} Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.163699 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5794bf846d-82xzg" event={"ID":"ab301488-e86d-4ba2-b628-f4ea689acd3b","Type":"ContainerStarted","Data":"f8a851dfe023e77ce2012d0b840a4729b646e24254cac11ed22579fa4353c01b"} Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.163708 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5794bf846d-82xzg" event={"ID":"ab301488-e86d-4ba2-b628-f4ea689acd3b","Type":"ContainerStarted","Data":"5655340f4bf0abd595b0c47b02dacb9178105661696797fd33a844b3ed3d1922"} Feb 18 
19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.168121 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.168159 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.194806 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5794bf846d-82xzg" podStartSLOduration=2.194783 podStartE2EDuration="2.194783s" podCreationTimestamp="2026-02-18 19:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:29.192447799 +0000 UTC m=+1088.897380464" watchObservedRunningTime="2026-02-18 19:35:29.194783 +0000 UTC m=+1088.899715665" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.259677 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:35:29 crc kubenswrapper[4942]: E0218 19:35:29.260203 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="983d5293-8413-4a29-88b2-ba775b3b4a8b" containerName="watcher-db-sync" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.260215 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="983d5293-8413-4a29-88b2-ba775b3b4a8b" containerName="watcher-db-sync" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.260446 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="983d5293-8413-4a29-88b2-ba775b3b4a8b" containerName="watcher-db-sync" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.261326 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.265248 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-jp82k" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.265439 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.282696 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.345866 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.346969 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.350783 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.357077 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.357113 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ldkj\" (UniqueName: \"kubernetes.io/projected/e9b5326c-208f-40ba-b395-8a6cf6b52399-kube-api-access-2ldkj\") pod \"watcher-applier-0\" (UID: \"e9b5326c-208f-40ba-b395-8a6cf6b52399\") " pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.357127 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.357156 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cf66c1e-2f67-4785-85e9-f0b06e578d29-logs\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.357180 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b5326c-208f-40ba-b395-8a6cf6b52399-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"e9b5326c-208f-40ba-b395-8a6cf6b52399\") " pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.357266 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9b5326c-208f-40ba-b395-8a6cf6b52399-logs\") pod \"watcher-applier-0\" (UID: \"e9b5326c-208f-40ba-b395-8a6cf6b52399\") " pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.357280 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9b5326c-208f-40ba-b395-8a6cf6b52399-config-data\") pod \"watcher-applier-0\" (UID: \"e9b5326c-208f-40ba-b395-8a6cf6b52399\") " pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.357303 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-config-data\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.357348 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89hpq\" (UniqueName: \"kubernetes.io/projected/9cf66c1e-2f67-4785-85e9-f0b06e578d29-kube-api-access-89hpq\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.376069 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.378449 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.386027 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.396923 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.403589 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459201 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459262 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89hpq\" 
(UniqueName: \"kubernetes.io/projected/9cf66c1e-2f67-4785-85e9-f0b06e578d29-kube-api-access-89hpq\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459290 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459354 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459373 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ldkj\" (UniqueName: \"kubernetes.io/projected/e9b5326c-208f-40ba-b395-8a6cf6b52399-kube-api-access-2ldkj\") pod \"watcher-applier-0\" (UID: \"e9b5326c-208f-40ba-b395-8a6cf6b52399\") " pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459389 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459416 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-config-data\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459445 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cf66c1e-2f67-4785-85e9-f0b06e578d29-logs\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459470 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b5326c-208f-40ba-b395-8a6cf6b52399-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"e9b5326c-208f-40ba-b395-8a6cf6b52399\") " pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459505 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-485k7\" (UniqueName: \"kubernetes.io/projected/cf325d20-c507-42cc-b96f-6e57ff55aa53-kube-api-access-485k7\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459557 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9b5326c-208f-40ba-b395-8a6cf6b52399-logs\") pod \"watcher-applier-0\" (UID: \"e9b5326c-208f-40ba-b395-8a6cf6b52399\") " pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459574 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9b5326c-208f-40ba-b395-8a6cf6b52399-config-data\") pod \"watcher-applier-0\" (UID: \"e9b5326c-208f-40ba-b395-8a6cf6b52399\") " 
pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459605 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-config-data\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.459638 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf325d20-c507-42cc-b96f-6e57ff55aa53-logs\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.460022 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cf66c1e-2f67-4785-85e9-f0b06e578d29-logs\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.460434 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9b5326c-208f-40ba-b395-8a6cf6b52399-logs\") pod \"watcher-applier-0\" (UID: \"e9b5326c-208f-40ba-b395-8a6cf6b52399\") " pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.466091 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.468114 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.479491 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b5326c-208f-40ba-b395-8a6cf6b52399-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"e9b5326c-208f-40ba-b395-8a6cf6b52399\") " pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.482785 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9b5326c-208f-40ba-b395-8a6cf6b52399-config-data\") pod \"watcher-applier-0\" (UID: \"e9b5326c-208f-40ba-b395-8a6cf6b52399\") " pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.484484 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-config-data\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.490836 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89hpq\" (UniqueName: \"kubernetes.io/projected/9cf66c1e-2f67-4785-85e9-f0b06e578d29-kube-api-access-89hpq\") pod \"watcher-decision-engine-0\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.490915 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ldkj\" (UniqueName: \"kubernetes.io/projected/e9b5326c-208f-40ba-b395-8a6cf6b52399-kube-api-access-2ldkj\") pod 
\"watcher-applier-0\" (UID: \"e9b5326c-208f-40ba-b395-8a6cf6b52399\") " pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.527240 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.527405 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.564220 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf325d20-c507-42cc-b96f-6e57ff55aa53-logs\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.564279 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.564316 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.564414 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-config-data\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.564465 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-485k7\" (UniqueName: \"kubernetes.io/projected/cf325d20-c507-42cc-b96f-6e57ff55aa53-kube-api-access-485k7\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.566617 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf325d20-c507-42cc-b96f-6e57ff55aa53-logs\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.578827 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.579861 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-config-data\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.580971 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.586065 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-485k7\" (UniqueName: \"kubernetes.io/projected/cf325d20-c507-42cc-b96f-6e57ff55aa53-kube-api-access-485k7\") pod \"watcher-api-0\" (UID: 
\"cf325d20-c507-42cc-b96f-6e57ff55aa53\") " pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.614549 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.614611 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.621961 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.687585 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.720193 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.798182 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.883621 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-credential-keys\") pod \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.883951 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28x9f\" (UniqueName: \"kubernetes.io/projected/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-kube-api-access-28x9f\") pod \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.883980 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-config-data\") pod \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.884135 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-combined-ca-bundle\") pod \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.884173 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-scripts\") pod \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.884205 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-fernet-keys\") pod \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\" (UID: \"f29ae8a1-b3cc-452c-ac99-b450ef3125d8\") " Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.891387 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-scripts" (OuterVolumeSpecName: "scripts") pod "f29ae8a1-b3cc-452c-ac99-b450ef3125d8" (UID: "f29ae8a1-b3cc-452c-ac99-b450ef3125d8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.891921 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f29ae8a1-b3cc-452c-ac99-b450ef3125d8" (UID: "f29ae8a1-b3cc-452c-ac99-b450ef3125d8"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.893301 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-kube-api-access-28x9f" (OuterVolumeSpecName: "kube-api-access-28x9f") pod "f29ae8a1-b3cc-452c-ac99-b450ef3125d8" (UID: "f29ae8a1-b3cc-452c-ac99-b450ef3125d8"). InnerVolumeSpecName "kube-api-access-28x9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.908592 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f29ae8a1-b3cc-452c-ac99-b450ef3125d8" (UID: "f29ae8a1-b3cc-452c-ac99-b450ef3125d8"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.923040 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-config-data" (OuterVolumeSpecName: "config-data") pod "f29ae8a1-b3cc-452c-ac99-b450ef3125d8" (UID: "f29ae8a1-b3cc-452c-ac99-b450ef3125d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.955613 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f29ae8a1-b3cc-452c-ac99-b450ef3125d8" (UID: "f29ae8a1-b3cc-452c-ac99-b450ef3125d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.985935 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.985962 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.985972 4942 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.985980 4942 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:29 crc 
kubenswrapper[4942]: I0218 19:35:29.985988 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28x9f\" (UniqueName: \"kubernetes.io/projected/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-kube-api-access-28x9f\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:29 crc kubenswrapper[4942]: I0218 19:35:29.985998 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f29ae8a1-b3cc-452c-ac99-b450ef3125d8-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.234038 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.244850 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tnqg7" event={"ID":"f29ae8a1-b3cc-452c-ac99-b450ef3125d8","Type":"ContainerDied","Data":"6ec2961c66c2e9651f7fa79f615b6ade4d1fc9deb7327e765b2f6fda45cc46c8"} Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.244899 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ec2961c66c2e9651f7fa79f615b6ade4d1fc9deb7327e765b2f6fda45cc46c8" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.244989 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tnqg7" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.250550 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-56897c69bf-gkt87"] Feb 18 19:35:30 crc kubenswrapper[4942]: E0218 19:35:30.251027 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f29ae8a1-b3cc-452c-ac99-b450ef3125d8" containerName="keystone-bootstrap" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.251045 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f29ae8a1-b3cc-452c-ac99-b450ef3125d8" containerName="keystone-bootstrap" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.251279 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f29ae8a1-b3cc-452c-ac99-b450ef3125d8" containerName="keystone-bootstrap" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.252126 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.260487 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9szpl" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.261219 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.261625 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.263976 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.264200 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.265221 4942 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"keystone" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.285364 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-56897c69bf-gkt87"] Feb 18 19:35:30 crc kubenswrapper[4942]: W0218 19:35:30.293252 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cf66c1e_2f67_4785_85e9_f0b06e578d29.slice/crio-e7e10840e11edbe6af151474727a77162010126b060487f8547836dcab0bb348 WatchSource:0}: Error finding container e7e10840e11edbe6af151474727a77162010126b060487f8547836dcab0bb348: Status 404 returned error can't find the container with id e7e10840e11edbe6af151474727a77162010126b060487f8547836dcab0bb348 Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.302621 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-internal-tls-certs\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.302806 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-fernet-keys\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.302846 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-public-tls-certs\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.302937 
4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-credential-keys\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.302998 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-scripts\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.303023 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsrsr\" (UniqueName: \"kubernetes.io/projected/df16a440-84af-448f-a26c-9407514d1eda-kube-api-access-tsrsr\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.303048 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-config-data\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.303068 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-combined-ca-bundle\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 
19:35:30.388872 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.402629 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.404750 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-fernet-keys\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.404809 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-public-tls-certs\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.404849 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-credential-keys\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.404889 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-scripts\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.404911 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsrsr\" (UniqueName: 
\"kubernetes.io/projected/df16a440-84af-448f-a26c-9407514d1eda-kube-api-access-tsrsr\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.404935 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-combined-ca-bundle\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.404957 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-config-data\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.405054 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-internal-tls-certs\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.412517 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-scripts\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.413207 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-internal-tls-certs\") pod 
\"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.413364 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-fernet-keys\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.413489 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-public-tls-certs\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.414683 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-config-data\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.417871 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-combined-ca-bundle\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.422216 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/df16a440-84af-448f-a26c-9407514d1eda-credential-keys\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 
crc kubenswrapper[4942]: I0218 19:35:30.438243 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsrsr\" (UniqueName: \"kubernetes.io/projected/df16a440-84af-448f-a26c-9407514d1eda-kube-api-access-tsrsr\") pod \"keystone-56897c69bf-gkt87\" (UID: \"df16a440-84af-448f-a26c-9407514d1eda\") " pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:30 crc kubenswrapper[4942]: I0218 19:35:30.653188 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:35:31 crc kubenswrapper[4942]: W0218 19:35:31.150976 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf16a440_84af_448f_a26c_9407514d1eda.slice/crio-de57c852e3f6a3a01d4d6f462a0802e52e93276dbf9a642383bd559fcb122c17 WatchSource:0}: Error finding container de57c852e3f6a3a01d4d6f462a0802e52e93276dbf9a642383bd559fcb122c17: Status 404 returned error can't find the container with id de57c852e3f6a3a01d4d6f462a0802e52e93276dbf9a642383bd559fcb122c17 Feb 18 19:35:31 crc kubenswrapper[4942]: I0218 19:35:31.170360 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-56897c69bf-gkt87"] Feb 18 19:35:31 crc kubenswrapper[4942]: I0218 19:35:31.258106 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"e9b5326c-208f-40ba-b395-8a6cf6b52399","Type":"ContainerStarted","Data":"1d5d88a2fcdce7e521ce89119f4619b9b739dd19a073e77bd14da576b76cc719"} Feb 18 19:35:31 crc kubenswrapper[4942]: I0218 19:35:31.261267 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-56897c69bf-gkt87" event={"ID":"df16a440-84af-448f-a26c-9407514d1eda","Type":"ContainerStarted","Data":"de57c852e3f6a3a01d4d6f462a0802e52e93276dbf9a642383bd559fcb122c17"} Feb 18 19:35:31 crc kubenswrapper[4942]: I0218 19:35:31.272487 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/watcher-decision-engine-0" event={"ID":"9cf66c1e-2f67-4785-85e9-f0b06e578d29","Type":"ContainerStarted","Data":"e7e10840e11edbe6af151474727a77162010126b060487f8547836dcab0bb348"} Feb 18 19:35:31 crc kubenswrapper[4942]: I0218 19:35:31.279009 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"cf325d20-c507-42cc-b96f-6e57ff55aa53","Type":"ContainerStarted","Data":"796b3cc6f87bbc8cea79f9f672a04a291cbb2f04782a6f0d27d4592a418cd947"} Feb 18 19:35:31 crc kubenswrapper[4942]: I0218 19:35:31.768935 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" Feb 18 19:35:31 crc kubenswrapper[4942]: I0218 19:35:31.838336 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-pdwb6"] Feb 18 19:35:31 crc kubenswrapper[4942]: I0218 19:35:31.838558 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" podUID="f354be6c-0a53-41b2-923d-60de99a6ed65" containerName="dnsmasq-dns" containerID="cri-o://cd8e8a9783f92883c4d637d09eea3e643009a45c5a511f5d36eb98f2dff7bd34" gracePeriod=10 Feb 18 19:35:32 crc kubenswrapper[4942]: I0218 19:35:32.289455 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-56897c69bf-gkt87" event={"ID":"df16a440-84af-448f-a26c-9407514d1eda","Type":"ContainerStarted","Data":"954b02989454405595ca18cbf334497a3b6b28da79771c0bf9e99c46d8b5cd6a"} Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.075800 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.184304 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs5jn\" (UniqueName: \"kubernetes.io/projected/f354be6c-0a53-41b2-923d-60de99a6ed65-kube-api-access-cs5jn\") pod \"f354be6c-0a53-41b2-923d-60de99a6ed65\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.184507 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-config\") pod \"f354be6c-0a53-41b2-923d-60de99a6ed65\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.184539 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-ovsdbserver-sb\") pod \"f354be6c-0a53-41b2-923d-60de99a6ed65\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.184591 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-dns-swift-storage-0\") pod \"f354be6c-0a53-41b2-923d-60de99a6ed65\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.184616 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-dns-svc\") pod \"f354be6c-0a53-41b2-923d-60de99a6ed65\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.184677 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-ovsdbserver-nb\") pod \"f354be6c-0a53-41b2-923d-60de99a6ed65\" (UID: \"f354be6c-0a53-41b2-923d-60de99a6ed65\") " Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.193264 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f354be6c-0a53-41b2-923d-60de99a6ed65-kube-api-access-cs5jn" (OuterVolumeSpecName: "kube-api-access-cs5jn") pod "f354be6c-0a53-41b2-923d-60de99a6ed65" (UID: "f354be6c-0a53-41b2-923d-60de99a6ed65"). InnerVolumeSpecName "kube-api-access-cs5jn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.236884 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f354be6c-0a53-41b2-923d-60de99a6ed65" (UID: "f354be6c-0a53-41b2-923d-60de99a6ed65"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.249388 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f354be6c-0a53-41b2-923d-60de99a6ed65" (UID: "f354be6c-0a53-41b2-923d-60de99a6ed65"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.251115 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-config" (OuterVolumeSpecName: "config") pod "f354be6c-0a53-41b2-923d-60de99a6ed65" (UID: "f354be6c-0a53-41b2-923d-60de99a6ed65"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.256227 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f354be6c-0a53-41b2-923d-60de99a6ed65" (UID: "f354be6c-0a53-41b2-923d-60de99a6ed65"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.266197 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f354be6c-0a53-41b2-923d-60de99a6ed65" (UID: "f354be6c-0a53-41b2-923d-60de99a6ed65"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.287086 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.287247 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs5jn\" (UniqueName: \"kubernetes.io/projected/f354be6c-0a53-41b2-923d-60de99a6ed65-kube-api-access-cs5jn\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.287307 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.287360 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-ovsdbserver-sb\") on node 
\"crc\" DevicePath \"\"" Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.287410 4942 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.287459 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f354be6c-0a53-41b2-923d-60de99a6ed65-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.306523 4942 generic.go:334] "Generic (PLEG): container finished" podID="f354be6c-0a53-41b2-923d-60de99a6ed65" containerID="cd8e8a9783f92883c4d637d09eea3e643009a45c5a511f5d36eb98f2dff7bd34" exitCode=0 Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.306586 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.306599 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" event={"ID":"f354be6c-0a53-41b2-923d-60de99a6ed65","Type":"ContainerDied","Data":"cd8e8a9783f92883c4d637d09eea3e643009a45c5a511f5d36eb98f2dff7bd34"} Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.306664 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-pdwb6" event={"ID":"f354be6c-0a53-41b2-923d-60de99a6ed65","Type":"ContainerDied","Data":"d0d48456629f18d0d25f803c1de4ee3c6cb53d9140b37084a9b2aa9d6750f014"} Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.306689 4942 scope.go:117] "RemoveContainer" containerID="cd8e8a9783f92883c4d637d09eea3e643009a45c5a511f5d36eb98f2dff7bd34" Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.335476 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-56897c69bf-gkt87" 
podStartSLOduration=3.335451583 podStartE2EDuration="3.335451583s" podCreationTimestamp="2026-02-18 19:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:33.32609721 +0000 UTC m=+1093.031029885" watchObservedRunningTime="2026-02-18 19:35:33.335451583 +0000 UTC m=+1093.040384248" Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.374955 4942 scope.go:117] "RemoveContainer" containerID="64088a0ac3e8c72656fdd5f6eb8640c5b4d051cdc758b1b3613619d364046d6d" Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.385984 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-pdwb6"] Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.394678 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-pdwb6"] Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.407451 4942 scope.go:117] "RemoveContainer" containerID="cd8e8a9783f92883c4d637d09eea3e643009a45c5a511f5d36eb98f2dff7bd34" Feb 18 19:35:33 crc kubenswrapper[4942]: E0218 19:35:33.407868 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd8e8a9783f92883c4d637d09eea3e643009a45c5a511f5d36eb98f2dff7bd34\": container with ID starting with cd8e8a9783f92883c4d637d09eea3e643009a45c5a511f5d36eb98f2dff7bd34 not found: ID does not exist" containerID="cd8e8a9783f92883c4d637d09eea3e643009a45c5a511f5d36eb98f2dff7bd34" Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.407895 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd8e8a9783f92883c4d637d09eea3e643009a45c5a511f5d36eb98f2dff7bd34"} err="failed to get container status \"cd8e8a9783f92883c4d637d09eea3e643009a45c5a511f5d36eb98f2dff7bd34\": rpc error: code = NotFound desc = could not find container 
\"cd8e8a9783f92883c4d637d09eea3e643009a45c5a511f5d36eb98f2dff7bd34\": container with ID starting with cd8e8a9783f92883c4d637d09eea3e643009a45c5a511f5d36eb98f2dff7bd34 not found: ID does not exist" Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.407917 4942 scope.go:117] "RemoveContainer" containerID="64088a0ac3e8c72656fdd5f6eb8640c5b4d051cdc758b1b3613619d364046d6d" Feb 18 19:35:33 crc kubenswrapper[4942]: E0218 19:35:33.408229 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64088a0ac3e8c72656fdd5f6eb8640c5b4d051cdc758b1b3613619d364046d6d\": container with ID starting with 64088a0ac3e8c72656fdd5f6eb8640c5b4d051cdc758b1b3613619d364046d6d not found: ID does not exist" containerID="64088a0ac3e8c72656fdd5f6eb8640c5b4d051cdc758b1b3613619d364046d6d" Feb 18 19:35:33 crc kubenswrapper[4942]: I0218 19:35:33.408284 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64088a0ac3e8c72656fdd5f6eb8640c5b4d051cdc758b1b3613619d364046d6d"} err="failed to get container status \"64088a0ac3e8c72656fdd5f6eb8640c5b4d051cdc758b1b3613619d364046d6d\": rpc error: code = NotFound desc = could not find container \"64088a0ac3e8c72656fdd5f6eb8640c5b4d051cdc758b1b3613619d364046d6d\": container with ID starting with 64088a0ac3e8c72656fdd5f6eb8640c5b4d051cdc758b1b3613619d364046d6d not found: ID does not exist" Feb 18 19:35:34 crc kubenswrapper[4942]: I0218 19:35:34.327379 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5cd0efdc-b208-4270-9c23-33e01f7298be","Type":"ContainerStarted","Data":"7f7ecb8106c4011dd2affe0db157078ed440c3dc9a5f336a7fd4922172637f01"} Feb 18 19:35:34 crc kubenswrapper[4942]: I0218 19:35:34.331508 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" 
event={"ID":"cf325d20-c507-42cc-b96f-6e57ff55aa53","Type":"ContainerStarted","Data":"aa132dbcbfbe636d2466bf98fe3a945bcf6b8f37a1c6b00263bbaa8b8d41b75b"} Feb 18 19:35:34 crc kubenswrapper[4942]: I0218 19:35:34.354181 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.354158484 podStartE2EDuration="8.354158484s" podCreationTimestamp="2026-02-18 19:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:34.344667728 +0000 UTC m=+1094.049600393" watchObservedRunningTime="2026-02-18 19:35:34.354158484 +0000 UTC m=+1094.059091149" Feb 18 19:35:35 crc kubenswrapper[4942]: I0218 19:35:35.048022 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f354be6c-0a53-41b2-923d-60de99a6ed65" path="/var/lib/kubelet/pods/f354be6c-0a53-41b2-923d-60de99a6ed65/volumes" Feb 18 19:35:36 crc kubenswrapper[4942]: I0218 19:35:36.349472 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 19:35:36 crc kubenswrapper[4942]: I0218 19:35:36.349824 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 19:35:36 crc kubenswrapper[4942]: I0218 19:35:36.383200 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 19:35:36 crc kubenswrapper[4942]: I0218 19:35:36.389051 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 19:35:36 crc kubenswrapper[4942]: I0218 19:35:36.813008 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 19:35:36 crc kubenswrapper[4942]: I0218 19:35:36.813313 4942 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 19:35:36 crc kubenswrapper[4942]: I0218 19:35:36.858336 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 19:35:36 crc kubenswrapper[4942]: I0218 19:35:36.871977 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 19:35:37 crc kubenswrapper[4942]: I0218 19:35:37.357309 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 19:35:37 crc kubenswrapper[4942]: I0218 19:35:37.357605 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 19:35:37 crc kubenswrapper[4942]: I0218 19:35:37.357625 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 19:35:37 crc kubenswrapper[4942]: I0218 19:35:37.357635 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 19:35:39 crc kubenswrapper[4942]: I0218 19:35:39.382801 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"cf325d20-c507-42cc-b96f-6e57ff55aa53","Type":"ContainerStarted","Data":"a5770f508e1c40bf4ef682bff10bac69873d582c1a0625dbd01c701b14695817"} Feb 18 19:35:39 crc kubenswrapper[4942]: I0218 19:35:39.383434 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 19:35:39 crc kubenswrapper[4942]: I0218 19:35:39.386106 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.172:9322/\": dial tcp 10.217.0.172:9322: connect: connection refused" Feb 18 19:35:39 crc 
kubenswrapper[4942]: I0218 19:35:39.413574 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=10.413555809 podStartE2EDuration="10.413555809s" podCreationTimestamp="2026-02-18 19:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:39.412179093 +0000 UTC m=+1099.117111788" watchObservedRunningTime="2026-02-18 19:35:39.413555809 +0000 UTC m=+1099.118488474" Feb 18 19:35:39 crc kubenswrapper[4942]: I0218 19:35:39.529976 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-54d64cf59b-xp7rk" podUID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.158:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.158:8443: connect: connection refused" Feb 18 19:35:39 crc kubenswrapper[4942]: I0218 19:35:39.617753 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7b6b6597b8-m8ngr" podUID="55d24776-2d1c-413a-8ba1-06cdadf63d04" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused" Feb 18 19:35:39 crc kubenswrapper[4942]: I0218 19:35:39.654587 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 19:35:39 crc kubenswrapper[4942]: I0218 19:35:39.658563 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 19:35:39 crc kubenswrapper[4942]: I0218 19:35:39.658657 4942 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:35:39 crc kubenswrapper[4942]: I0218 19:35:39.720477 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 18 
19:35:39 crc kubenswrapper[4942]: I0218 19:35:39.720522 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 18 19:35:40 crc kubenswrapper[4942]: I0218 19:35:40.033998 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 19:35:40 crc kubenswrapper[4942]: I0218 19:35:40.409316 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"e9b5326c-208f-40ba-b395-8a6cf6b52399","Type":"ContainerStarted","Data":"a6d830c5dfd1037f768c2a7f3ca288e31e574b8af1c450ae54908c2a7cf2a5bf"} Feb 18 19:35:40 crc kubenswrapper[4942]: I0218 19:35:40.413963 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h2kjs" event={"ID":"8aeac097-ba93-4859-a14f-839ae1421e28","Type":"ContainerStarted","Data":"4d566d8d0c1f2395dae51975108188a50f273b881992f487f3b84531a9f2e9f1"} Feb 18 19:35:40 crc kubenswrapper[4942]: I0218 19:35:40.417039 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4517368-322e-4467-b31a-45b487e1035b","Type":"ContainerStarted","Data":"cc9e9ad424bb99e035b269c6d15c8bb5153037019b02d571a224f399df6aeed3"} Feb 18 19:35:40 crc kubenswrapper[4942]: I0218 19:35:40.427500 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=2.038048685 podStartE2EDuration="11.427482296s" podCreationTimestamp="2026-02-18 19:35:29 +0000 UTC" firstStartedPulling="2026-02-18 19:35:30.391541519 +0000 UTC m=+1090.096474184" lastFinishedPulling="2026-02-18 19:35:39.78097513 +0000 UTC m=+1099.485907795" observedRunningTime="2026-02-18 19:35:40.42688576 +0000 UTC m=+1100.131818425" watchObservedRunningTime="2026-02-18 19:35:40.427482296 +0000 UTC m=+1100.132414961" Feb 18 19:35:40 crc kubenswrapper[4942]: I0218 19:35:40.448455 4942 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/barbican-db-sync-h2kjs" podStartSLOduration=2.905393158 podStartE2EDuration="51.44843879s" podCreationTimestamp="2026-02-18 19:34:49 +0000 UTC" firstStartedPulling="2026-02-18 19:34:51.219193791 +0000 UTC m=+1050.924126456" lastFinishedPulling="2026-02-18 19:35:39.762239423 +0000 UTC m=+1099.467172088" observedRunningTime="2026-02-18 19:35:40.444676213 +0000 UTC m=+1100.149608888" watchObservedRunningTime="2026-02-18 19:35:40.44843879 +0000 UTC m=+1100.153371455" Feb 18 19:35:40 crc kubenswrapper[4942]: I0218 19:35:40.762027 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/watcher-api-0" podUID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.172:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:35:41 crc kubenswrapper[4942]: I0218 19:35:41.270396 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 19:35:41 crc kubenswrapper[4942]: I0218 19:35:41.428883 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qvzh5" event={"ID":"8db7f68b-a733-44fc-90b9-a1dd489fb42d","Type":"ContainerStarted","Data":"e0015f6cb0ed0e4e677017a14f5fcb4378f27372b8c41b1fdca89664675f56a0"} Feb 18 19:35:41 crc kubenswrapper[4942]: I0218 19:35:41.431266 4942 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:35:41 crc kubenswrapper[4942]: I0218 19:35:41.432047 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"9cf66c1e-2f67-4785-85e9-f0b06e578d29","Type":"ContainerStarted","Data":"565df78e0898331235735ffa8948cdc3dea82d61dc2d3519faa61301dd4f6ffd"} Feb 18 19:35:41 crc kubenswrapper[4942]: I0218 19:35:41.450912 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-qvzh5" 
podStartSLOduration=4.508829439 podStartE2EDuration="52.450894419s" podCreationTimestamp="2026-02-18 19:34:49 +0000 UTC" firstStartedPulling="2026-02-18 19:34:51.311627554 +0000 UTC m=+1051.016560219" lastFinishedPulling="2026-02-18 19:35:39.253692534 +0000 UTC m=+1098.958625199" observedRunningTime="2026-02-18 19:35:41.444885923 +0000 UTC m=+1101.149818588" watchObservedRunningTime="2026-02-18 19:35:41.450894419 +0000 UTC m=+1101.155827084" Feb 18 19:35:41 crc kubenswrapper[4942]: I0218 19:35:41.464978 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=1.7960282520000002 podStartE2EDuration="12.464960934s" podCreationTimestamp="2026-02-18 19:35:29 +0000 UTC" firstStartedPulling="2026-02-18 19:35:30.307273108 +0000 UTC m=+1090.012205773" lastFinishedPulling="2026-02-18 19:35:40.97620578 +0000 UTC m=+1100.681138455" observedRunningTime="2026-02-18 19:35:41.463121867 +0000 UTC m=+1101.168054542" watchObservedRunningTime="2026-02-18 19:35:41.464960934 +0000 UTC m=+1101.169893599" Feb 18 19:35:42 crc kubenswrapper[4942]: I0218 19:35:42.530751 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 19:35:44 crc kubenswrapper[4942]: I0218 19:35:44.687939 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 18 19:35:44 crc kubenswrapper[4942]: I0218 19:35:44.803895 4942 generic.go:334] "Generic (PLEG): container finished" podID="8aeac097-ba93-4859-a14f-839ae1421e28" containerID="4d566d8d0c1f2395dae51975108188a50f273b881992f487f3b84531a9f2e9f1" exitCode=0 Feb 18 19:35:44 crc kubenswrapper[4942]: I0218 19:35:44.804021 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h2kjs" event={"ID":"8aeac097-ba93-4859-a14f-839ae1421e28","Type":"ContainerDied","Data":"4d566d8d0c1f2395dae51975108188a50f273b881992f487f3b84531a9f2e9f1"} Feb 18 19:35:46 crc 
kubenswrapper[4942]: I0218 19:35:46.828055 4942 generic.go:334] "Generic (PLEG): container finished" podID="8db7f68b-a733-44fc-90b9-a1dd489fb42d" containerID="e0015f6cb0ed0e4e677017a14f5fcb4378f27372b8c41b1fdca89664675f56a0" exitCode=0 Feb 18 19:35:46 crc kubenswrapper[4942]: I0218 19:35:46.828147 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qvzh5" event={"ID":"8db7f68b-a733-44fc-90b9-a1dd489fb42d","Type":"ContainerDied","Data":"e0015f6cb0ed0e4e677017a14f5fcb4378f27372b8c41b1fdca89664675f56a0"} Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.308994 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h2kjs" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.313611 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qvzh5" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.350170 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8aeac097-ba93-4859-a14f-839ae1421e28-db-sync-config-data\") pod \"8aeac097-ba93-4859-a14f-839ae1421e28\" (UID: \"8aeac097-ba93-4859-a14f-839ae1421e28\") " Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.350214 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aeac097-ba93-4859-a14f-839ae1421e28-combined-ca-bundle\") pod \"8aeac097-ba93-4859-a14f-839ae1421e28\" (UID: \"8aeac097-ba93-4859-a14f-839ae1421e28\") " Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.350238 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdzfp\" (UniqueName: \"kubernetes.io/projected/8aeac097-ba93-4859-a14f-839ae1421e28-kube-api-access-kdzfp\") pod \"8aeac097-ba93-4859-a14f-839ae1421e28\" (UID: \"8aeac097-ba93-4859-a14f-839ae1421e28\") " Feb 
18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.350272 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-combined-ca-bundle\") pod \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.350293 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z75l6\" (UniqueName: \"kubernetes.io/projected/8db7f68b-a733-44fc-90b9-a1dd489fb42d-kube-api-access-z75l6\") pod \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.350317 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-scripts\") pod \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.350334 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8db7f68b-a733-44fc-90b9-a1dd489fb42d-etc-machine-id\") pod \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.350399 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-db-sync-config-data\") pod \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.350439 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-config-data\") pod \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\" (UID: \"8db7f68b-a733-44fc-90b9-a1dd489fb42d\") " Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.353021 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8db7f68b-a733-44fc-90b9-a1dd489fb42d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8db7f68b-a733-44fc-90b9-a1dd489fb42d" (UID: "8db7f68b-a733-44fc-90b9-a1dd489fb42d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.363943 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-scripts" (OuterVolumeSpecName: "scripts") pod "8db7f68b-a733-44fc-90b9-a1dd489fb42d" (UID: "8db7f68b-a733-44fc-90b9-a1dd489fb42d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.364098 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aeac097-ba93-4859-a14f-839ae1421e28-kube-api-access-kdzfp" (OuterVolumeSpecName: "kube-api-access-kdzfp") pod "8aeac097-ba93-4859-a14f-839ae1421e28" (UID: "8aeac097-ba93-4859-a14f-839ae1421e28"). InnerVolumeSpecName "kube-api-access-kdzfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.367991 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8db7f68b-a733-44fc-90b9-a1dd489fb42d" (UID: "8db7f68b-a733-44fc-90b9-a1dd489fb42d"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.378022 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8db7f68b-a733-44fc-90b9-a1dd489fb42d-kube-api-access-z75l6" (OuterVolumeSpecName: "kube-api-access-z75l6") pod "8db7f68b-a733-44fc-90b9-a1dd489fb42d" (UID: "8db7f68b-a733-44fc-90b9-a1dd489fb42d"). InnerVolumeSpecName "kube-api-access-z75l6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.381749 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8aeac097-ba93-4859-a14f-839ae1421e28-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8aeac097-ba93-4859-a14f-839ae1421e28" (UID: "8aeac097-ba93-4859-a14f-839ae1421e28"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.395014 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8db7f68b-a733-44fc-90b9-a1dd489fb42d" (UID: "8db7f68b-a733-44fc-90b9-a1dd489fb42d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.396952 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8aeac097-ba93-4859-a14f-839ae1421e28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8aeac097-ba93-4859-a14f-839ae1421e28" (UID: "8aeac097-ba93-4859-a14f-839ae1421e28"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.417916 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-config-data" (OuterVolumeSpecName: "config-data") pod "8db7f68b-a733-44fc-90b9-a1dd489fb42d" (UID: "8db7f68b-a733-44fc-90b9-a1dd489fb42d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.452636 4942 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8aeac097-ba93-4859-a14f-839ae1421e28-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.452663 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aeac097-ba93-4859-a14f-839ae1421e28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.452673 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdzfp\" (UniqueName: \"kubernetes.io/projected/8aeac097-ba93-4859-a14f-839ae1421e28-kube-api-access-kdzfp\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.452684 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.452692 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z75l6\" (UniqueName: \"kubernetes.io/projected/8db7f68b-a733-44fc-90b9-a1dd489fb42d-kube-api-access-z75l6\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.452700 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.452710 4942 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8db7f68b-a733-44fc-90b9-a1dd489fb42d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.452718 4942 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.452726 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8db7f68b-a733-44fc-90b9-a1dd489fb42d-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.622992 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.659312 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.688023 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.714266 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.736199 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.741242 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 19:35:49 crc kubenswrapper[4942]: 
I0218 19:35:49.867359 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h2kjs" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.867405 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h2kjs" event={"ID":"8aeac097-ba93-4859-a14f-839ae1421e28","Type":"ContainerDied","Data":"e12d1b9fecda9ebe7bb6c836765d71cc803f359fe9c297ce1d8263fb74f3fe1c"} Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.868221 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e12d1b9fecda9ebe7bb6c836765d71cc803f359fe9c297ce1d8263fb74f3fe1c" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.869663 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qvzh5" event={"ID":"8db7f68b-a733-44fc-90b9-a1dd489fb42d","Type":"ContainerDied","Data":"e6bd17d6977af834a72bbf74bee36179b26553390413854446805a67a2e12afa"} Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.869700 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6bd17d6977af834a72bbf74bee36179b26553390413854446805a67a2e12afa" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.869891 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-qvzh5" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.872272 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4517368-322e-4467-b31a-45b487e1035b","Type":"ContainerStarted","Data":"3ce89a3d92b53a41feec8224f4fc75ea2cc11cd4761428cd9dab597a1c7d6d0a"} Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.872460 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="ceilometer-central-agent" containerID="cri-o://36f35a87fe58dff89b8aed800be1382b5a73805c6babc09fce366da3515f6407" gracePeriod=30 Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.872691 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.872969 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="proxy-httpd" containerID="cri-o://3ce89a3d92b53a41feec8224f4fc75ea2cc11cd4761428cd9dab597a1c7d6d0a" gracePeriod=30 Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.873064 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="sg-core" containerID="cri-o://cc9e9ad424bb99e035b269c6d15c8bb5153037019b02d571a224f399df6aeed3" gracePeriod=30 Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.873089 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.873117 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="ceilometer-notification-agent" 
containerID="cri-o://e4a549323fce47497ee0c4cfa6ce99131c2b1fa4f1a33956d55a73512533ebbd" gracePeriod=30 Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.921634 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.843364486 podStartE2EDuration="59.921610388s" podCreationTimestamp="2026-02-18 19:34:50 +0000 UTC" firstStartedPulling="2026-02-18 19:34:51.257100617 +0000 UTC m=+1050.962033282" lastFinishedPulling="2026-02-18 19:35:49.335346519 +0000 UTC m=+1109.040279184" observedRunningTime="2026-02-18 19:35:49.909044822 +0000 UTC m=+1109.613977497" watchObservedRunningTime="2026-02-18 19:35:49.921610388 +0000 UTC m=+1109.626543053" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.959212 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 18 19:35:49 crc kubenswrapper[4942]: I0218 19:35:49.964953 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.555183 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-ffd58675f-h7jk6"] Feb 18 19:35:50 crc kubenswrapper[4942]: E0218 19:35:50.555618 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8db7f68b-a733-44fc-90b9-a1dd489fb42d" containerName="cinder-db-sync" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.555629 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="8db7f68b-a733-44fc-90b9-a1dd489fb42d" containerName="cinder-db-sync" Feb 18 19:35:50 crc kubenswrapper[4942]: E0218 19:35:50.555661 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aeac097-ba93-4859-a14f-839ae1421e28" containerName="barbican-db-sync" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.555667 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aeac097-ba93-4859-a14f-839ae1421e28" 
containerName="barbican-db-sync" Feb 18 19:35:50 crc kubenswrapper[4942]: E0218 19:35:50.555675 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f354be6c-0a53-41b2-923d-60de99a6ed65" containerName="init" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.555681 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f354be6c-0a53-41b2-923d-60de99a6ed65" containerName="init" Feb 18 19:35:50 crc kubenswrapper[4942]: E0218 19:35:50.555690 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f354be6c-0a53-41b2-923d-60de99a6ed65" containerName="dnsmasq-dns" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.555695 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f354be6c-0a53-41b2-923d-60de99a6ed65" containerName="dnsmasq-dns" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.555891 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aeac097-ba93-4859-a14f-839ae1421e28" containerName="barbican-db-sync" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.555903 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="8db7f68b-a733-44fc-90b9-a1dd489fb42d" containerName="cinder-db-sync" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.555917 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f354be6c-0a53-41b2-923d-60de99a6ed65" containerName="dnsmasq-dns" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.556824 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.569440 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.572503 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-qg5fj" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.577394 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5987dd846-f7dd9"] Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.578316 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.579161 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.583109 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.606429 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5987dd846-f7dd9"] Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.623822 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-ffd58675f-h7jk6"] Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.680431 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwwb2\" (UniqueName: \"kubernetes.io/projected/0e207482-f349-415e-86d3-800b0caf9a78-kube-api-access-bwwb2\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.680484 4942 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e207482-f349-415e-86d3-800b0caf9a78-config-data\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.680506 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db966aef-0b18-400a-b3e8-49487308bf05-logs\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.680563 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db966aef-0b18-400a-b3e8-49487308bf05-config-data\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.680577 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db966aef-0b18-400a-b3e8-49487308bf05-combined-ca-bundle\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.680596 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db966aef-0b18-400a-b3e8-49487308bf05-config-data-custom\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: 
\"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.680615 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e207482-f349-415e-86d3-800b0caf9a78-logs\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.680634 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6xvm\" (UniqueName: \"kubernetes.io/projected/db966aef-0b18-400a-b3e8-49487308bf05-kube-api-access-n6xvm\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.680669 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e207482-f349-415e-86d3-800b0caf9a78-combined-ca-bundle\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.680694 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e207482-f349-415e-86d3-800b0caf9a78-config-data-custom\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.747227 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b895b5785-t2j2r"] Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 
19:35:50.748687 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.775814 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-t2j2r"] Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.783825 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e207482-f349-415e-86d3-800b0caf9a78-config-data-custom\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.783917 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwwb2\" (UniqueName: \"kubernetes.io/projected/0e207482-f349-415e-86d3-800b0caf9a78-kube-api-access-bwwb2\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.783947 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e207482-f349-415e-86d3-800b0caf9a78-config-data\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.783966 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db966aef-0b18-400a-b3e8-49487308bf05-logs\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.784020 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db966aef-0b18-400a-b3e8-49487308bf05-config-data\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.784037 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db966aef-0b18-400a-b3e8-49487308bf05-combined-ca-bundle\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.784056 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db966aef-0b18-400a-b3e8-49487308bf05-config-data-custom\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.784072 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e207482-f349-415e-86d3-800b0caf9a78-logs\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.784091 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6xvm\" (UniqueName: \"kubernetes.io/projected/db966aef-0b18-400a-b3e8-49487308bf05-kube-api-access-n6xvm\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc 
kubenswrapper[4942]: I0218 19:35:50.784123 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e207482-f349-415e-86d3-800b0caf9a78-combined-ca-bundle\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.792017 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db966aef-0b18-400a-b3e8-49487308bf05-logs\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.797534 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e207482-f349-415e-86d3-800b0caf9a78-logs\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.797672 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e207482-f349-415e-86d3-800b0caf9a78-combined-ca-bundle\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.800650 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e207482-f349-415e-86d3-800b0caf9a78-config-data\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.800866 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e207482-f349-415e-86d3-800b0caf9a78-config-data-custom\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.807396 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db966aef-0b18-400a-b3e8-49487308bf05-config-data-custom\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.818970 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.819873 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db966aef-0b18-400a-b3e8-49487308bf05-config-data\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.820374 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.823315 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db966aef-0b18-400a-b3e8-49487308bf05-combined-ca-bundle\") pod \"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.828322 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwwb2\" (UniqueName: \"kubernetes.io/projected/0e207482-f349-415e-86d3-800b0caf9a78-kube-api-access-bwwb2\") pod \"barbican-worker-ffd58675f-h7jk6\" (UID: \"0e207482-f349-415e-86d3-800b0caf9a78\") " pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.856193 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rhdz8" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.856390 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.856492 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.856610 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.857971 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.872403 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6xvm\" (UniqueName: \"kubernetes.io/projected/db966aef-0b18-400a-b3e8-49487308bf05-kube-api-access-n6xvm\") pod 
\"barbican-keystone-listener-5987dd846-f7dd9\" (UID: \"db966aef-0b18-400a-b3e8-49487308bf05\") " pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.875960 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-ffd58675f-h7jk6" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.887393 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.887485 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9h6h\" (UniqueName: \"kubernetes.io/projected/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-kube-api-access-v9h6h\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.887523 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-dns-svc\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.887556 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc 
kubenswrapper[4942]: I0218 19:35:50.887579 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.887600 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-config\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.895924 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.965366 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-t2j2r"] Feb 18 19:35:50 crc kubenswrapper[4942]: E0218 19:35:50.966003 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-v9h6h ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-b895b5785-t2j2r" podUID="25b5e8eb-ac39-4f81-9601-0b2cc0a54a13" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.971097 4942 generic.go:334] "Generic (PLEG): container finished" podID="e4517368-322e-4467-b31a-45b487e1035b" containerID="3ce89a3d92b53a41feec8224f4fc75ea2cc11cd4761428cd9dab597a1c7d6d0a" exitCode=0 Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.971128 4942 generic.go:334] "Generic (PLEG): container finished" podID="e4517368-322e-4467-b31a-45b487e1035b" 
containerID="cc9e9ad424bb99e035b269c6d15c8bb5153037019b02d571a224f399df6aeed3" exitCode=2 Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.971136 4942 generic.go:334] "Generic (PLEG): container finished" podID="e4517368-322e-4467-b31a-45b487e1035b" containerID="36f35a87fe58dff89b8aed800be1382b5a73805c6babc09fce366da3515f6407" exitCode=0 Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.971705 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4517368-322e-4467-b31a-45b487e1035b","Type":"ContainerDied","Data":"3ce89a3d92b53a41feec8224f4fc75ea2cc11cd4761428cd9dab597a1c7d6d0a"} Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.971735 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4517368-322e-4467-b31a-45b487e1035b","Type":"ContainerDied","Data":"cc9e9ad424bb99e035b269c6d15c8bb5153037019b02d571a224f399df6aeed3"} Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.971746 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4517368-322e-4467-b31a-45b487e1035b","Type":"ContainerDied","Data":"36f35a87fe58dff89b8aed800be1382b5a73805c6babc09fce366da3515f6407"} Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.992158 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-config\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.992230 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-scripts\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:50 crc 
kubenswrapper[4942]: I0218 19:35:50.992256 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb8kr\" (UniqueName: \"kubernetes.io/projected/17399208-02d7-46c9-b5ea-b01563e8baf1-kube-api-access-nb8kr\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.992279 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.992297 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/17399208-02d7-46c9-b5ea-b01563e8baf1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.992354 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-config-data\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.992377 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9h6h\" (UniqueName: \"kubernetes.io/projected/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-kube-api-access-v9h6h\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.992410 
4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-dns-svc\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.992426 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.992447 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.992470 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.992492 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.993279 4942 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.993811 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-config\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:50 crc kubenswrapper[4942]: I0218 19:35:50.994337 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.007934 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.009271 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-dns-svc\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.037379 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9h6h\" (UniqueName: 
\"kubernetes.io/projected/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-kube-api-access-v9h6h\") pod \"dnsmasq-dns-b895b5785-t2j2r\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") " pod="openstack/dnsmasq-dns-b895b5785-t2j2r" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.094480 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-scripts\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.094567 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb8kr\" (UniqueName: \"kubernetes.io/projected/17399208-02d7-46c9-b5ea-b01563e8baf1-kube-api-access-nb8kr\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.094593 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/17399208-02d7-46c9-b5ea-b01563e8baf1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.094673 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-config-data\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.094737 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.094778 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.098513 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.098978 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lrqxl"] Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.098993 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/17399208-02d7-46c9-b5ea-b01563e8baf1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.100234 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lrqxl"] Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.100306 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.107677 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-scripts\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.112874 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.118871 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-config-data\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.120799 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-9d65dd5d-c4zgj"] Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.133237 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.139938 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.163772 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb8kr\" (UniqueName: \"kubernetes.io/projected/17399208-02d7-46c9-b5ea-b01563e8baf1-kube-api-access-nb8kr\") pod \"cinder-scheduler-0\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " pod="openstack/cinder-scheduler-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.181803 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-9d65dd5d-c4zgj"] Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.198580 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-config-data-custom\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.198857 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92ffg\" (UniqueName: \"kubernetes.io/projected/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-kube-api-access-92ffg\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.198883 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " 
pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.198937 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-logs\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.198957 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-config-data\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.199018 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-combined-ca-bundle\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.200011 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.200037 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: 
\"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.200078 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.200094 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-config\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.200112 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tb7v\" (UniqueName: \"kubernetes.io/projected/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-kube-api-access-8tb7v\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.228057 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.230630 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.242729 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.280021 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.306211 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.307124 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.307161 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.307189 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-config-data-custom\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.307211 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.307233 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-config\") pod 
\"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.307250 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tb7v\" (UniqueName: \"kubernetes.io/projected/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-kube-api-access-8tb7v\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.308831 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82d59804-3d83-4594-855b-f08b93e146a4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.308933 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-config-data-custom\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.308994 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-scripts\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.309036 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdj52\" (UniqueName: \"kubernetes.io/projected/82d59804-3d83-4594-855b-f08b93e146a4-kube-api-access-wdj52\") pod \"cinder-api-0\" (UID: 
\"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.309159 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-config-data\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.309185 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.309277 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92ffg\" (UniqueName: \"kubernetes.io/projected/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-kube-api-access-92ffg\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.309325 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.309439 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-logs\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc 
kubenswrapper[4942]: I0218 19:35:51.309470 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-config-data\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.309559 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82d59804-3d83-4594-855b-f08b93e146a4-logs\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.309593 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.309601 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-combined-ca-bundle\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.310282 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-config\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.308838 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.310316 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.311146 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.312163 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-logs\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.318730 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-config-data-custom\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.326366 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-config-data\") pod 
\"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.330593 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tb7v\" (UniqueName: \"kubernetes.io/projected/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-kube-api-access-8tb7v\") pod \"dnsmasq-dns-5c9776ccc5-lrqxl\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.332194 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92ffg\" (UniqueName: \"kubernetes.io/projected/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-kube-api-access-92ffg\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.334417 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-combined-ca-bundle\") pod \"barbican-api-9d65dd5d-c4zgj\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") " pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.415621 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0" Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.415976 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-config-data\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " 
pod="openstack/cinder-api-0"
Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.416072 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82d59804-3d83-4594-855b-f08b93e146a4-logs\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0"
Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.416137 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-config-data-custom\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0"
Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.416181 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82d59804-3d83-4594-855b-f08b93e146a4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0"
Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.416294 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-scripts\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0"
Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.416324 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdj52\" (UniqueName: \"kubernetes.io/projected/82d59804-3d83-4594-855b-f08b93e146a4-kube-api-access-wdj52\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0"
Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.418989 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82d59804-3d83-4594-855b-f08b93e146a4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0"
Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.424124 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82d59804-3d83-4594-855b-f08b93e146a4-logs\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0"
Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.431147 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-scripts\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0"
Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.431211 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-config-data-custom\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0"
Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.431370 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-config-data\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0"
Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.431775 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0"
Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.438252 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdj52\" (UniqueName: \"kubernetes.io/projected/82d59804-3d83-4594-855b-f08b93e146a4-kube-api-access-wdj52\") pod \"cinder-api-0\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " pod="openstack/cinder-api-0"
Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.451241 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl"
Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.480245 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-9d65dd5d-c4zgj"
Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.568350 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.694568 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-ffd58675f-h7jk6"]
Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.802428 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-67cc44d6c6-sp59w"
Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.810492 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5987dd846-f7dd9"]
Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.928745 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.987056 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" event={"ID":"db966aef-0b18-400a-b3e8-49487308bf05","Type":"ContainerStarted","Data":"475c72870f27467190ac06d0fda886059a940e78ec7c42749f579ef22ae8d000"}
Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.991935 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-ffd58675f-h7jk6" event={"ID":"0e207482-f349-415e-86d3-800b0caf9a78","Type":"ContainerStarted","Data":"bf40e2824d87c3428f6357a33e3b4dc50ec72ed94982cd5e48acd2a1a672ca8d"}
Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.995942 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"17399208-02d7-46c9-b5ea-b01563e8baf1","Type":"ContainerStarted","Data":"0e634a244135542433fea3600e46692e4afcef32f4f22d2c4274a7c75eb4af2b"}
Feb 18 19:35:51 crc kubenswrapper[4942]: I0218 19:35:51.996044 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-t2j2r"
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.012331 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-t2j2r"
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.028487 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6b8c9f8ffc-qtdr8"]
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.028774 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6b8c9f8ffc-qtdr8" podUID="921d1a28-ead8-42a6-933c-38a339741884" containerName="neutron-api" containerID="cri-o://5406c6b90781279268f75608c064a21d3a65e4eb4c8a4c7e959d4465b49185b9" gracePeriod=30
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.028942 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6b8c9f8ffc-qtdr8" podUID="921d1a28-ead8-42a6-933c-38a339741884" containerName="neutron-httpd" containerID="cri-o://531ee7816fd7353cd71c0f54232b96ad0dd37eddd3c96b8ac1f0e58197be9795" gracePeriod=30
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.044283 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6b8c9f8ffc-qtdr8" podUID="921d1a28-ead8-42a6-933c-38a339741884" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.166:9696/\": read tcp 10.217.0.2:58106->10.217.0.166:9696: read: connection reset by peer"
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.056010 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lrqxl"]
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.068507 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-9d65dd5d-c4zgj"]
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.083414 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-555cb4cc6f-xh69m"]
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.087082 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-555cb4cc6f-xh69m"
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.092832 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-555cb4cc6f-xh69m"]
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.132441 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-config\") pod \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") "
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.132494 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-ovsdbserver-sb\") pod \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") "
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.132533 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-dns-swift-storage-0\") pod \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") "
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.132590 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-ovsdbserver-nb\") pod \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") "
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.132639 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9h6h\" (UniqueName: \"kubernetes.io/projected/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-kube-api-access-v9h6h\") pod \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") "
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.132671 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-dns-svc\") pod \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\" (UID: \"25b5e8eb-ac39-4f81-9601-0b2cc0a54a13\") "
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.133726 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "25b5e8eb-ac39-4f81-9601-0b2cc0a54a13" (UID: "25b5e8eb-ac39-4f81-9601-0b2cc0a54a13"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.134205 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-config" (OuterVolumeSpecName: "config") pod "25b5e8eb-ac39-4f81-9601-0b2cc0a54a13" (UID: "25b5e8eb-ac39-4f81-9601-0b2cc0a54a13"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.134217 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "25b5e8eb-ac39-4f81-9601-0b2cc0a54a13" (UID: "25b5e8eb-ac39-4f81-9601-0b2cc0a54a13"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.134613 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "25b5e8eb-ac39-4f81-9601-0b2cc0a54a13" (UID: "25b5e8eb-ac39-4f81-9601-0b2cc0a54a13"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.134753 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "25b5e8eb-ac39-4f81-9601-0b2cc0a54a13" (UID: "25b5e8eb-ac39-4f81-9601-0b2cc0a54a13"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.138043 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-kube-api-access-v9h6h" (OuterVolumeSpecName: "kube-api-access-v9h6h") pod "25b5e8eb-ac39-4f81-9601-0b2cc0a54a13" (UID: "25b5e8eb-ac39-4f81-9601-0b2cc0a54a13"). InnerVolumeSpecName "kube-api-access-v9h6h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.194915 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.234746 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-combined-ca-bundle\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m"
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.234927 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-internal-tls-certs\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m"
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.234968 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-config\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m"
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.234993 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-ovndb-tls-certs\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m"
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.235028 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-httpd-config\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m"
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.235077 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb48j\" (UniqueName: \"kubernetes.io/projected/fb5df9b1-974d-4c39-9278-b79355109acb-kube-api-access-lb48j\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m"
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.235107 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-public-tls-certs\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m"
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.235212 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.235225 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.235238 4942 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.235250 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.235262 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9h6h\" (UniqueName: \"kubernetes.io/projected/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-kube-api-access-v9h6h\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.235272 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.339957 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-config\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m"
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.340004 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-ovndb-tls-certs\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m"
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.340047 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-httpd-config\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m"
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.340108 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb48j\" (UniqueName: \"kubernetes.io/projected/fb5df9b1-974d-4c39-9278-b79355109acb-kube-api-access-lb48j\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m"
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.340143 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-public-tls-certs\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m"
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.340246 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-combined-ca-bundle\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m"
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.340331 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-internal-tls-certs\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m"
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.345183 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-combined-ca-bundle\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m"
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.345850 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-config\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m"
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.350379 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-internal-tls-certs\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m"
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.353367 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-public-tls-certs\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m"
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.357059 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-ovndb-tls-certs\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m"
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.365838 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fb5df9b1-974d-4c39-9278-b79355109acb-httpd-config\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m"
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.378591 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb48j\" (UniqueName: \"kubernetes.io/projected/fb5df9b1-974d-4c39-9278-b79355109acb-kube-api-access-lb48j\") pod \"neutron-555cb4cc6f-xh69m\" (UID: \"fb5df9b1-974d-4c39-9278-b79355109acb\") " pod="openstack/neutron-555cb4cc6f-xh69m"
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.405796 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-555cb4cc6f-xh69m"
Feb 18 19:35:52 crc kubenswrapper[4942]: I0218 19:35:52.677917 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-54d64cf59b-xp7rk"
Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.022009 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-555cb4cc6f-xh69m"]
Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.078462 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"82d59804-3d83-4594-855b-f08b93e146a4","Type":"ContainerStarted","Data":"3311d8cddbe87c83a85f89c5d8660e6aa1ae4c9bc3dfb708ff87ff00d9bd9163"}
Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.084729 4942 generic.go:334] "Generic (PLEG): container finished" podID="d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" containerID="dd0e1dffa19992cdfee9a8283a58b64cddc29aa874d5b918d39a6e3462563edd" exitCode=0
Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.085714 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" event={"ID":"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0","Type":"ContainerDied","Data":"dd0e1dffa19992cdfee9a8283a58b64cddc29aa874d5b918d39a6e3462563edd"}
Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.085752 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" event={"ID":"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0","Type":"ContainerStarted","Data":"e8c28553c41794bb048bb9a7187c1a1ab7f1585b41b9526ea5b0ab594f5efa4f"}
Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.095230 4942 generic.go:334] "Generic (PLEG): container finished" podID="921d1a28-ead8-42a6-933c-38a339741884" containerID="531ee7816fd7353cd71c0f54232b96ad0dd37eddd3c96b8ac1f0e58197be9795" exitCode=0
Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.095318 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8c9f8ffc-qtdr8" event={"ID":"921d1a28-ead8-42a6-933c-38a339741884","Type":"ContainerDied","Data":"531ee7816fd7353cd71c0f54232b96ad0dd37eddd3c96b8ac1f0e58197be9795"}
Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.098791 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7b6b6597b8-m8ngr"
Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.100466 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-t2j2r"
Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.101294 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9d65dd5d-c4zgj" event={"ID":"e4cc3ba2-abea-4fa2-9272-65ac8721c87d","Type":"ContainerStarted","Data":"ecbd025dc0394b9034d21e03a44147434ce1904d40c5ab1c61c7e88c90aadd1e"}
Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.101320 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-9d65dd5d-c4zgj"
Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.101332 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9d65dd5d-c4zgj" event={"ID":"e4cc3ba2-abea-4fa2-9272-65ac8721c87d","Type":"ContainerStarted","Data":"2b088a9056603d3d58e3baff59e58248fc06291c4ff662a1d08a6fc2664c9a1c"}
Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.101342 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9d65dd5d-c4zgj" event={"ID":"e4cc3ba2-abea-4fa2-9272-65ac8721c87d","Type":"ContainerStarted","Data":"508f30ffb0657c1e039b8b11a78534bab62a7a31f3ad591584cdc61bbaa73274"}
Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.101358 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-9d65dd5d-c4zgj"
Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.144345 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-9d65dd5d-c4zgj" podStartSLOduration=2.14430182 podStartE2EDuration="2.14430182s" podCreationTimestamp="2026-02-18 19:35:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:53.131638381 +0000 UTC m=+1112.836571046" watchObservedRunningTime="2026-02-18 19:35:53.14430182 +0000 UTC m=+1112.849234485"
Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.201004 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-t2j2r"]
Feb 18 19:35:53 crc kubenswrapper[4942]: I0218 19:35:53.230856 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-t2j2r"]
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.124281 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-555cb4cc6f-xh69m" event={"ID":"fb5df9b1-974d-4c39-9278-b79355109acb","Type":"ContainerStarted","Data":"df10b2e774d393aaf9e8c0074048e695a19eea0844a790d5e500020634603cb1"}
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.128335 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.141753 4942 generic.go:334] "Generic (PLEG): container finished" podID="e4517368-322e-4467-b31a-45b487e1035b" containerID="e4a549323fce47497ee0c4cfa6ce99131c2b1fa4f1a33956d55a73512533ebbd" exitCode=0
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.141827 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4517368-322e-4467-b31a-45b487e1035b","Type":"ContainerDied","Data":"e4a549323fce47497ee0c4cfa6ce99131c2b1fa4f1a33956d55a73512533ebbd"}
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.141853 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4517368-322e-4467-b31a-45b487e1035b","Type":"ContainerDied","Data":"6813065f5777b4af8dd89f8c25333785bb85a450b21a1a7ab93d214ca1b8049c"}
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.141871 4942 scope.go:117] "RemoveContainer" containerID="3ce89a3d92b53a41feec8224f4fc75ea2cc11cd4761428cd9dab597a1c7d6d0a"
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.159271 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"82d59804-3d83-4594-855b-f08b93e146a4","Type":"ContainerStarted","Data":"06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff"}
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.204314 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4517368-322e-4467-b31a-45b487e1035b-log-httpd\") pod \"e4517368-322e-4467-b31a-45b487e1035b\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") "
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.204398 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-scripts\") pod \"e4517368-322e-4467-b31a-45b487e1035b\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") "
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.204450 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-config-data\") pod \"e4517368-322e-4467-b31a-45b487e1035b\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") "
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.204527 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4517368-322e-4467-b31a-45b487e1035b-run-httpd\") pod \"e4517368-322e-4467-b31a-45b487e1035b\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") "
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.204589 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-sg-core-conf-yaml\") pod \"e4517368-322e-4467-b31a-45b487e1035b\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") "
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.204669 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68tzs\" (UniqueName: \"kubernetes.io/projected/e4517368-322e-4467-b31a-45b487e1035b-kube-api-access-68tzs\") pod \"e4517368-322e-4467-b31a-45b487e1035b\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") "
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.204693 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-combined-ca-bundle\") pod \"e4517368-322e-4467-b31a-45b487e1035b\" (UID: \"e4517368-322e-4467-b31a-45b487e1035b\") "
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.204715 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6b8c9f8ffc-qtdr8" podUID="921d1a28-ead8-42a6-933c-38a339741884" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.166:9696/\": dial tcp 10.217.0.166:9696: connect: connection refused"
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.224354 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4517368-322e-4467-b31a-45b487e1035b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e4517368-322e-4467-b31a-45b487e1035b" (UID: "e4517368-322e-4467-b31a-45b487e1035b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.224625 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4517368-322e-4467-b31a-45b487e1035b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e4517368-322e-4467-b31a-45b487e1035b" (UID: "e4517368-322e-4467-b31a-45b487e1035b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.226352 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.230918 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4517368-322e-4467-b31a-45b487e1035b-kube-api-access-68tzs" (OuterVolumeSpecName: "kube-api-access-68tzs") pod "e4517368-322e-4467-b31a-45b487e1035b" (UID: "e4517368-322e-4467-b31a-45b487e1035b"). InnerVolumeSpecName "kube-api-access-68tzs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.277230 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-scripts" (OuterVolumeSpecName: "scripts") pod "e4517368-322e-4467-b31a-45b487e1035b" (UID: "e4517368-322e-4467-b31a-45b487e1035b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.315598 4942 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4517368-322e-4467-b31a-45b487e1035b-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.315627 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68tzs\" (UniqueName: \"kubernetes.io/projected/e4517368-322e-4467-b31a-45b487e1035b-kube-api-access-68tzs\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.315635 4942 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4517368-322e-4467-b31a-45b487e1035b-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.315645 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.343022 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e4517368-322e-4467-b31a-45b487e1035b" (UID: "e4517368-322e-4467-b31a-45b487e1035b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.394342 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4517368-322e-4467-b31a-45b487e1035b" (UID: "e4517368-322e-4467-b31a-45b487e1035b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.417947 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.417979 4942 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.462359 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-config-data" (OuterVolumeSpecName: "config-data") pod "e4517368-322e-4467-b31a-45b487e1035b" (UID: "e4517368-322e-4467-b31a-45b487e1035b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.519530 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4517368-322e-4467-b31a-45b487e1035b-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.519666 4942 scope.go:117] "RemoveContainer" containerID="cc9e9ad424bb99e035b269c6d15c8bb5153037019b02d571a224f399df6aeed3"
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.546287 4942 scope.go:117] "RemoveContainer" containerID="e4a549323fce47497ee0c4cfa6ce99131c2b1fa4f1a33956d55a73512533ebbd"
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.574462 4942 scope.go:117] "RemoveContainer" containerID="36f35a87fe58dff89b8aed800be1382b5a73805c6babc09fce366da3515f6407"
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.595502 4942 scope.go:117] "RemoveContainer" containerID="3ce89a3d92b53a41feec8224f4fc75ea2cc11cd4761428cd9dab597a1c7d6d0a"
Feb 18 19:35:54 crc kubenswrapper[4942]: E0218 19:35:54.595984 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ce89a3d92b53a41feec8224f4fc75ea2cc11cd4761428cd9dab597a1c7d6d0a\": container with ID starting with 3ce89a3d92b53a41feec8224f4fc75ea2cc11cd4761428cd9dab597a1c7d6d0a not found: ID does not exist" containerID="3ce89a3d92b53a41feec8224f4fc75ea2cc11cd4761428cd9dab597a1c7d6d0a"
Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.596035 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ce89a3d92b53a41feec8224f4fc75ea2cc11cd4761428cd9dab597a1c7d6d0a"} err="failed to get container status \"3ce89a3d92b53a41feec8224f4fc75ea2cc11cd4761428cd9dab597a1c7d6d0a\": rpc error: code = NotFound desc = could not find container \"3ce89a3d92b53a41feec8224f4fc75ea2cc11cd4761428cd9dab597a1c7d6d0a\": container with ID starting with 
3ce89a3d92b53a41feec8224f4fc75ea2cc11cd4761428cd9dab597a1c7d6d0a not found: ID does not exist" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.596062 4942 scope.go:117] "RemoveContainer" containerID="cc9e9ad424bb99e035b269c6d15c8bb5153037019b02d571a224f399df6aeed3" Feb 18 19:35:54 crc kubenswrapper[4942]: E0218 19:35:54.596440 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc9e9ad424bb99e035b269c6d15c8bb5153037019b02d571a224f399df6aeed3\": container with ID starting with cc9e9ad424bb99e035b269c6d15c8bb5153037019b02d571a224f399df6aeed3 not found: ID does not exist" containerID="cc9e9ad424bb99e035b269c6d15c8bb5153037019b02d571a224f399df6aeed3" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.596476 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc9e9ad424bb99e035b269c6d15c8bb5153037019b02d571a224f399df6aeed3"} err="failed to get container status \"cc9e9ad424bb99e035b269c6d15c8bb5153037019b02d571a224f399df6aeed3\": rpc error: code = NotFound desc = could not find container \"cc9e9ad424bb99e035b269c6d15c8bb5153037019b02d571a224f399df6aeed3\": container with ID starting with cc9e9ad424bb99e035b269c6d15c8bb5153037019b02d571a224f399df6aeed3 not found: ID does not exist" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.596497 4942 scope.go:117] "RemoveContainer" containerID="e4a549323fce47497ee0c4cfa6ce99131c2b1fa4f1a33956d55a73512533ebbd" Feb 18 19:35:54 crc kubenswrapper[4942]: E0218 19:35:54.596691 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4a549323fce47497ee0c4cfa6ce99131c2b1fa4f1a33956d55a73512533ebbd\": container with ID starting with e4a549323fce47497ee0c4cfa6ce99131c2b1fa4f1a33956d55a73512533ebbd not found: ID does not exist" containerID="e4a549323fce47497ee0c4cfa6ce99131c2b1fa4f1a33956d55a73512533ebbd" Feb 18 19:35:54 crc 
kubenswrapper[4942]: I0218 19:35:54.596947 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4a549323fce47497ee0c4cfa6ce99131c2b1fa4f1a33956d55a73512533ebbd"} err="failed to get container status \"e4a549323fce47497ee0c4cfa6ce99131c2b1fa4f1a33956d55a73512533ebbd\": rpc error: code = NotFound desc = could not find container \"e4a549323fce47497ee0c4cfa6ce99131c2b1fa4f1a33956d55a73512533ebbd\": container with ID starting with e4a549323fce47497ee0c4cfa6ce99131c2b1fa4f1a33956d55a73512533ebbd not found: ID does not exist" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.596963 4942 scope.go:117] "RemoveContainer" containerID="36f35a87fe58dff89b8aed800be1382b5a73805c6babc09fce366da3515f6407" Feb 18 19:35:54 crc kubenswrapper[4942]: E0218 19:35:54.597262 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36f35a87fe58dff89b8aed800be1382b5a73805c6babc09fce366da3515f6407\": container with ID starting with 36f35a87fe58dff89b8aed800be1382b5a73805c6babc09fce366da3515f6407 not found: ID does not exist" containerID="36f35a87fe58dff89b8aed800be1382b5a73805c6babc09fce366da3515f6407" Feb 18 19:35:54 crc kubenswrapper[4942]: I0218 19:35:54.597293 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36f35a87fe58dff89b8aed800be1382b5a73805c6babc09fce366da3515f6407"} err="failed to get container status \"36f35a87fe58dff89b8aed800be1382b5a73805c6babc09fce366da3515f6407\": rpc error: code = NotFound desc = could not find container \"36f35a87fe58dff89b8aed800be1382b5a73805c6babc09fce366da3515f6407\": container with ID starting with 36f35a87fe58dff89b8aed800be1382b5a73805c6babc09fce366da3515f6407 not found: ID does not exist" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.010331 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 
19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.045839 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25b5e8eb-ac39-4f81-9601-0b2cc0a54a13" path="/var/lib/kubelet/pods/25b5e8eb-ac39-4f81-9601-0b2cc0a54a13/volumes" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.173130 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-ffd58675f-h7jk6" event={"ID":"0e207482-f349-415e-86d3-800b0caf9a78","Type":"ContainerStarted","Data":"82f116b1fadc9ee58ff0d0e721f26d25af42e5fdecc0bceaf15464762fe8cbb8"} Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.173405 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-ffd58675f-h7jk6" event={"ID":"0e207482-f349-415e-86d3-800b0caf9a78","Type":"ContainerStarted","Data":"88c9bd2cae0182f053bfe66d65099b836ca1dbef4ceef315043341409b84d63c"} Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.177989 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" event={"ID":"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0","Type":"ContainerStarted","Data":"2e107e6e0a09eb362ca701ccec933f2884a01ef22670bcf63ff6185d0e31a00b"} Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.178498 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.181302 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" event={"ID":"db966aef-0b18-400a-b3e8-49487308bf05","Type":"ContainerStarted","Data":"96bc861a905a28c4a6e6caa9c8098b62f50f8ee4ae1714468f6fa925f5792b05"} Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.181336 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" 
event={"ID":"db966aef-0b18-400a-b3e8-49487308bf05","Type":"ContainerStarted","Data":"a17ade9db8f76f59cdc5fa928b002e53b11d6359c565eba8567629af4417fdbf"} Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.183361 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-555cb4cc6f-xh69m" event={"ID":"fb5df9b1-974d-4c39-9278-b79355109acb","Type":"ContainerStarted","Data":"028284f4d716a71cf9e550a8b8a0724dad91e7b8f56a396013671a08337d28f3"} Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.189637 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.195398 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-ffd58675f-h7jk6" podStartSLOduration=2.710985648 podStartE2EDuration="5.195377517s" podCreationTimestamp="2026-02-18 19:35:50 +0000 UTC" firstStartedPulling="2026-02-18 19:35:51.70780626 +0000 UTC m=+1111.412738925" lastFinishedPulling="2026-02-18 19:35:54.192198129 +0000 UTC m=+1113.897130794" observedRunningTime="2026-02-18 19:35:55.188230351 +0000 UTC m=+1114.893163036" watchObservedRunningTime="2026-02-18 19:35:55.195377517 +0000 UTC m=+1114.900310182" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.222091 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7b6b6597b8-m8ngr" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.233899 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5987dd846-f7dd9" podStartSLOduration=2.849130488 podStartE2EDuration="5.233879608s" podCreationTimestamp="2026-02-18 19:35:50 +0000 UTC" firstStartedPulling="2026-02-18 19:35:51.808468546 +0000 UTC m=+1111.513401221" lastFinishedPulling="2026-02-18 19:35:54.193217676 +0000 UTC m=+1113.898150341" observedRunningTime="2026-02-18 19:35:55.231138606 +0000 UTC m=+1114.936071281" 
watchObservedRunningTime="2026-02-18 19:35:55.233879608 +0000 UTC m=+1114.938812273" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.307899 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" podStartSLOduration=5.307873421 podStartE2EDuration="5.307873421s" podCreationTimestamp="2026-02-18 19:35:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:55.275503759 +0000 UTC m=+1114.980436444" watchObservedRunningTime="2026-02-18 19:35:55.307873421 +0000 UTC m=+1115.012806086" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.378935 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.465084 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.486993 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:35:55 crc kubenswrapper[4942]: E0218 19:35:55.487490 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="ceilometer-central-agent" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.487512 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="ceilometer-central-agent" Feb 18 19:35:55 crc kubenswrapper[4942]: E0218 19:35:55.487532 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="proxy-httpd" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.487540 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="proxy-httpd" Feb 18 19:35:55 crc kubenswrapper[4942]: E0218 19:35:55.487555 4942 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="ceilometer-notification-agent" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.487880 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="ceilometer-notification-agent" Feb 18 19:35:55 crc kubenswrapper[4942]: E0218 19:35:55.487926 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="sg-core" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.487934 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="sg-core" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.488136 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="sg-core" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.488161 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="ceilometer-central-agent" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.488176 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="proxy-httpd" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.488195 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4517368-322e-4467-b31a-45b487e1035b" containerName="ceilometer-notification-agent" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.490229 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.502127 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.502229 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.538827 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-config-data\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.538917 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-scripts\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.538935 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.538964 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2plf5\" (UniqueName: \"kubernetes.io/projected/cb08df0a-0162-4e04-a641-6fd65af9048b-kube-api-access-2plf5\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.539010 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb08df0a-0162-4e04-a641-6fd65af9048b-run-httpd\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.539031 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.539063 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb08df0a-0162-4e04-a641-6fd65af9048b-log-httpd\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.583187 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.623430 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-54d64cf59b-xp7rk"] Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.623730 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-54d64cf59b-xp7rk" podUID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerName="horizon-log" containerID="cri-o://036dc92b12e420ef80458fb3e23d3375424a9aed1ed6d80a904da58e73ba2659" gracePeriod=30 Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.624248 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-54d64cf59b-xp7rk" podUID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerName="horizon" 
containerID="cri-o://4bd98068ec637cd03846de3ac7d0bc145a81ebf089811ebc4b9501aa76cae874" gracePeriod=30 Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.646964 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-scripts\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.647012 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.647081 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2plf5\" (UniqueName: \"kubernetes.io/projected/cb08df0a-0162-4e04-a641-6fd65af9048b-kube-api-access-2plf5\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.647183 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb08df0a-0162-4e04-a641-6fd65af9048b-run-httpd\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.647217 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.647285 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb08df0a-0162-4e04-a641-6fd65af9048b-log-httpd\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.647373 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-config-data\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.648836 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb08df0a-0162-4e04-a641-6fd65af9048b-log-httpd\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.649506 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb08df0a-0162-4e04-a641-6fd65af9048b-run-httpd\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.669407 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.679538 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-config-data\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0" Feb 18 19:35:55 crc 
kubenswrapper[4942]: I0218 19:35:55.680383 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-scripts\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.689444 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2plf5\" (UniqueName: \"kubernetes.io/projected/cb08df0a-0162-4e04-a641-6fd65af9048b-kube-api-access-2plf5\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.701793 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " pod="openstack/ceilometer-0" Feb 18 19:35:55 crc kubenswrapper[4942]: I0218 19:35:55.873506 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.213202 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"17399208-02d7-46c9-b5ea-b01563e8baf1","Type":"ContainerStarted","Data":"ab6c4d04ee142d2e6670d2eba83ed1f3609e146414eca1aa78da29e2ecfc3a7c"} Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.221167 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"82d59804-3d83-4594-855b-f08b93e146a4","Type":"ContainerStarted","Data":"125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d"} Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.221318 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="82d59804-3d83-4594-855b-f08b93e146a4" containerName="cinder-api-log" containerID="cri-o://06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff" gracePeriod=30 Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.221394 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.221772 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="82d59804-3d83-4594-855b-f08b93e146a4" containerName="cinder-api" containerID="cri-o://125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d" gracePeriod=30 Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.237371 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-555cb4cc6f-xh69m" event={"ID":"fb5df9b1-974d-4c39-9278-b79355109acb","Type":"ContainerStarted","Data":"209f9439a272fd3644d3f8d4eb1b3879920c1b7b426b77d83f71a0d05d16e49e"} Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.239533 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-555cb4cc6f-xh69m" Feb 18 
19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.274115 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.274094106 podStartE2EDuration="5.274094106s" podCreationTimestamp="2026-02-18 19:35:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:56.24767341 +0000 UTC m=+1115.952606075" watchObservedRunningTime="2026-02-18 19:35:56.274094106 +0000 UTC m=+1115.979026771" Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.277315 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-555cb4cc6f-xh69m" podStartSLOduration=4.27730205 podStartE2EDuration="4.27730205s" podCreationTimestamp="2026-02-18 19:35:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:35:56.274042985 +0000 UTC m=+1115.978975650" watchObservedRunningTime="2026-02-18 19:35:56.27730205 +0000 UTC m=+1115.982234715" Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.345065 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:35:56 crc kubenswrapper[4942]: W0218 19:35:56.349319 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb08df0a_0162_4e04_a641_6fd65af9048b.slice/crio-7b5c07d9023f0c81a3490adcfb94e32fc0800eeb0c4be517c4b9b978e0bb5083 WatchSource:0}: Error finding container 7b5c07d9023f0c81a3490adcfb94e32fc0800eeb0c4be517c4b9b978e0bb5083: Status 404 returned error can't find the container with id 7b5c07d9023f0c81a3490adcfb94e32fc0800eeb0c4be517c4b9b978e0bb5083 Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.793473 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.871373 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-combined-ca-bundle\") pod \"82d59804-3d83-4594-855b-f08b93e146a4\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.871499 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82d59804-3d83-4594-855b-f08b93e146a4-logs\") pod \"82d59804-3d83-4594-855b-f08b93e146a4\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.871573 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-config-data-custom\") pod \"82d59804-3d83-4594-855b-f08b93e146a4\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.871597 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-config-data\") pod \"82d59804-3d83-4594-855b-f08b93e146a4\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.871647 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-scripts\") pod \"82d59804-3d83-4594-855b-f08b93e146a4\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.871676 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/82d59804-3d83-4594-855b-f08b93e146a4-etc-machine-id\") pod \"82d59804-3d83-4594-855b-f08b93e146a4\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.871713 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdj52\" (UniqueName: \"kubernetes.io/projected/82d59804-3d83-4594-855b-f08b93e146a4-kube-api-access-wdj52\") pod \"82d59804-3d83-4594-855b-f08b93e146a4\" (UID: \"82d59804-3d83-4594-855b-f08b93e146a4\") " Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.873897 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82d59804-3d83-4594-855b-f08b93e146a4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "82d59804-3d83-4594-855b-f08b93e146a4" (UID: "82d59804-3d83-4594-855b-f08b93e146a4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.874266 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82d59804-3d83-4594-855b-f08b93e146a4-logs" (OuterVolumeSpecName: "logs") pod "82d59804-3d83-4594-855b-f08b93e146a4" (UID: "82d59804-3d83-4594-855b-f08b93e146a4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.889673 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82d59804-3d83-4594-855b-f08b93e146a4-kube-api-access-wdj52" (OuterVolumeSpecName: "kube-api-access-wdj52") pod "82d59804-3d83-4594-855b-f08b93e146a4" (UID: "82d59804-3d83-4594-855b-f08b93e146a4"). InnerVolumeSpecName "kube-api-access-wdj52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.905914 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "82d59804-3d83-4594-855b-f08b93e146a4" (UID: "82d59804-3d83-4594-855b-f08b93e146a4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.906054 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-scripts" (OuterVolumeSpecName: "scripts") pod "82d59804-3d83-4594-855b-f08b93e146a4" (UID: "82d59804-3d83-4594-855b-f08b93e146a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.926938 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82d59804-3d83-4594-855b-f08b93e146a4" (UID: "82d59804-3d83-4594-855b-f08b93e146a4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.976132 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdj52\" (UniqueName: \"kubernetes.io/projected/82d59804-3d83-4594-855b-f08b93e146a4-kube-api-access-wdj52\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.976448 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.976457 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82d59804-3d83-4594-855b-f08b93e146a4-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.976467 4942 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.976477 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.976485 4942 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82d59804-3d83-4594-855b-f08b93e146a4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:56 crc kubenswrapper[4942]: I0218 19:35:56.977866 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-config-data" (OuterVolumeSpecName: "config-data") pod "82d59804-3d83-4594-855b-f08b93e146a4" (UID: "82d59804-3d83-4594-855b-f08b93e146a4"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.048329 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4517368-322e-4467-b31a-45b487e1035b" path="/var/lib/kubelet/pods/e4517368-322e-4467-b31a-45b487e1035b/volumes" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.078056 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82d59804-3d83-4594-855b-f08b93e146a4-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.248075 4942 generic.go:334] "Generic (PLEG): container finished" podID="82d59804-3d83-4594-855b-f08b93e146a4" containerID="125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d" exitCode=0 Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.248106 4942 generic.go:334] "Generic (PLEG): container finished" podID="82d59804-3d83-4594-855b-f08b93e146a4" containerID="06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff" exitCode=143 Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.248151 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"82d59804-3d83-4594-855b-f08b93e146a4","Type":"ContainerDied","Data":"125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d"} Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.248167 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.248190 4942 scope.go:117] "RemoveContainer" containerID="125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.248178 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"82d59804-3d83-4594-855b-f08b93e146a4","Type":"ContainerDied","Data":"06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff"} Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.248350 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"82d59804-3d83-4594-855b-f08b93e146a4","Type":"ContainerDied","Data":"3311d8cddbe87c83a85f89c5d8660e6aa1ae4c9bc3dfb708ff87ff00d9bd9163"} Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.250665 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb08df0a-0162-4e04-a641-6fd65af9048b","Type":"ContainerStarted","Data":"ed48b1a780714eb223b18d06dc51c76e72512cff5c52173a2e3ee292ee687994"} Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.250689 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb08df0a-0162-4e04-a641-6fd65af9048b","Type":"ContainerStarted","Data":"7b5c07d9023f0c81a3490adcfb94e32fc0800eeb0c4be517c4b9b978e0bb5083"} Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.254692 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"17399208-02d7-46c9-b5ea-b01563e8baf1","Type":"ContainerStarted","Data":"24e727d5d2fb180c7e5b210ba8e9f70f0b0d6335ad6d3b2ef9160574585ddb26"} Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.286565 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.154593301 podStartE2EDuration="7.286551435s" podCreationTimestamp="2026-02-18 
19:35:50 +0000 UTC" firstStartedPulling="2026-02-18 19:35:51.928965319 +0000 UTC m=+1111.633897984" lastFinishedPulling="2026-02-18 19:35:53.060923453 +0000 UTC m=+1112.765856118" observedRunningTime="2026-02-18 19:35:57.286435232 +0000 UTC m=+1116.991367897" watchObservedRunningTime="2026-02-18 19:35:57.286551435 +0000 UTC m=+1116.991484100" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.307416 4942 scope.go:117] "RemoveContainer" containerID="06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.311905 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.324634 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.381511 4942 scope.go:117] "RemoveContainer" containerID="125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d" Feb 18 19:35:57 crc kubenswrapper[4942]: E0218 19:35:57.387920 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d\": container with ID starting with 125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d not found: ID does not exist" containerID="125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.387979 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d"} err="failed to get container status \"125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d\": rpc error: code = NotFound desc = could not find container \"125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d\": container with ID starting with 
125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d not found: ID does not exist" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.388010 4942 scope.go:117] "RemoveContainer" containerID="06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff" Feb 18 19:35:57 crc kubenswrapper[4942]: E0218 19:35:57.392713 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff\": container with ID starting with 06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff not found: ID does not exist" containerID="06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.392756 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff"} err="failed to get container status \"06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff\": rpc error: code = NotFound desc = could not find container \"06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff\": container with ID starting with 06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff not found: ID does not exist" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.392796 4942 scope.go:117] "RemoveContainer" containerID="125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.392896 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:35:57 crc kubenswrapper[4942]: E0218 19:35:57.393279 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d59804-3d83-4594-855b-f08b93e146a4" containerName="cinder-api-log" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.393297 4942 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="82d59804-3d83-4594-855b-f08b93e146a4" containerName="cinder-api-log" Feb 18 19:35:57 crc kubenswrapper[4942]: E0218 19:35:57.393321 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d59804-3d83-4594-855b-f08b93e146a4" containerName="cinder-api" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.393328 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d59804-3d83-4594-855b-f08b93e146a4" containerName="cinder-api" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.393535 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d59804-3d83-4594-855b-f08b93e146a4" containerName="cinder-api-log" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.393555 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d59804-3d83-4594-855b-f08b93e146a4" containerName="cinder-api" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.394575 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.394860 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d"} err="failed to get container status \"125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d\": rpc error: code = NotFound desc = could not find container \"125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d\": container with ID starting with 125aa27e3a6a563686671243e3a03b123d6140e97f57742ca414e38ef07e285d not found: ID does not exist" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.394942 4942 scope.go:117] "RemoveContainer" containerID="06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.409935 4942 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff"} err="failed to get container status \"06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff\": rpc error: code = NotFound desc = could not find container \"06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff\": container with ID starting with 06ece880bf54f2ff4ce2bd899b5f766401d58f723ea7b2a3f90e7165416928ff not found: ID does not exist" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.410242 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.410294 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.410808 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.420754 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.485107 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-scripts\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.485368 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.485399 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-logs\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.485419 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plv8b\" (UniqueName: \"kubernetes.io/projected/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-kube-api-access-plv8b\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.485456 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-config-data\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.485472 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-config-data-custom\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.485506 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.485526 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.485561 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.527814 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7466887594-rv5fb"] Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.529378 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.537276 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.537505 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.557832 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7466887594-rv5fb"] Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.589813 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.589870 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-scripts\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.589905 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-config-data\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.589939 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.589972 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-logs\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.589990 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plv8b\" (UniqueName: \"kubernetes.io/projected/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-kube-api-access-plv8b\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.590036 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-config-data\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 
19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.590055 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46e4caaf-033e-499f-ba62-77297ea9bf09-logs\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.590071 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-public-tls-certs\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.590088 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-config-data-custom\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.590104 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-internal-tls-certs\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.590140 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6rsp\" (UniqueName: \"kubernetes.io/projected/46e4caaf-033e-499f-ba62-77297ea9bf09-kube-api-access-d6rsp\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 
18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.590165 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.590188 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-combined-ca-bundle\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.590216 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.590254 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-config-data-custom\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.603132 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-logs\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.603216 4942 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.605357 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-scripts\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.605431 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-config-data-custom\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.605716 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.619391 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.628009 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-config-data\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: 
I0218 19:35:57.629264 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.629405 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plv8b\" (UniqueName: \"kubernetes.io/projected/b2b3ec4f-5cab-4036-8450-0a9f7a5eae33-kube-api-access-plv8b\") pod \"cinder-api-0\" (UID: \"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33\") " pod="openstack/cinder-api-0" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.691962 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-config-data\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.692068 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46e4caaf-033e-499f-ba62-77297ea9bf09-logs\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.692087 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-public-tls-certs\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.692103 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-internal-tls-certs\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.692133 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6rsp\" (UniqueName: \"kubernetes.io/projected/46e4caaf-033e-499f-ba62-77297ea9bf09-kube-api-access-d6rsp\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.692157 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-combined-ca-bundle\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.692190 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-config-data-custom\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.697293 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46e4caaf-033e-499f-ba62-77297ea9bf09-logs\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.697511 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-config-data-custom\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.700428 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-combined-ca-bundle\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.702358 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-internal-tls-certs\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.702543 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-public-tls-certs\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.702810 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46e4caaf-033e-499f-ba62-77297ea9bf09-config-data\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.716270 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6rsp\" (UniqueName: 
\"kubernetes.io/projected/46e4caaf-033e-499f-ba62-77297ea9bf09-kube-api-access-d6rsp\") pod \"barbican-api-7466887594-rv5fb\" (UID: \"46e4caaf-033e-499f-ba62-77297ea9bf09\") " pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:57 crc kubenswrapper[4942]: I0218 19:35:57.740505 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.004814 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.307016 4942 generic.go:334] "Generic (PLEG): container finished" podID="921d1a28-ead8-42a6-933c-38a339741884" containerID="5406c6b90781279268f75608c064a21d3a65e4eb4c8a4c7e959d4465b49185b9" exitCode=0 Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.307288 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8c9f8ffc-qtdr8" event={"ID":"921d1a28-ead8-42a6-933c-38a339741884","Type":"ContainerDied","Data":"5406c6b90781279268f75608c064a21d3a65e4eb4c8a4c7e959d4465b49185b9"} Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.333638 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb08df0a-0162-4e04-a641-6fd65af9048b","Type":"ContainerStarted","Data":"28ebb3effac1a702e96312e12a7195c54046ef1e0a31212d28c03650f2be31be"} Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.350977 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.624854 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7466887594-rv5fb"] Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.722425 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.826649 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-public-tls-certs\") pod \"921d1a28-ead8-42a6-933c-38a339741884\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.826724 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-httpd-config\") pod \"921d1a28-ead8-42a6-933c-38a339741884\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.826836 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-internal-tls-certs\") pod \"921d1a28-ead8-42a6-933c-38a339741884\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.826921 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-combined-ca-bundle\") pod \"921d1a28-ead8-42a6-933c-38a339741884\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.826977 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-ovndb-tls-certs\") pod \"921d1a28-ead8-42a6-933c-38a339741884\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.827046 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-4hnxj\" (UniqueName: \"kubernetes.io/projected/921d1a28-ead8-42a6-933c-38a339741884-kube-api-access-4hnxj\") pod \"921d1a28-ead8-42a6-933c-38a339741884\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.827072 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-config\") pod \"921d1a28-ead8-42a6-933c-38a339741884\" (UID: \"921d1a28-ead8-42a6-933c-38a339741884\") " Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.835922 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "921d1a28-ead8-42a6-933c-38a339741884" (UID: "921d1a28-ead8-42a6-933c-38a339741884"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.846951 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/921d1a28-ead8-42a6-933c-38a339741884-kube-api-access-4hnxj" (OuterVolumeSpecName: "kube-api-access-4hnxj") pod "921d1a28-ead8-42a6-933c-38a339741884" (UID: "921d1a28-ead8-42a6-933c-38a339741884"). InnerVolumeSpecName "kube-api-access-4hnxj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.928772 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hnxj\" (UniqueName: \"kubernetes.io/projected/921d1a28-ead8-42a6-933c-38a339741884-kube-api-access-4hnxj\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.928795 4942 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.969638 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:35:58 crc kubenswrapper[4942]: I0218 19:35:58.991490 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-config" (OuterVolumeSpecName: "config") pod "921d1a28-ead8-42a6-933c-38a339741884" (UID: "921d1a28-ead8-42a6-933c-38a339741884"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.021921 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "921d1a28-ead8-42a6-933c-38a339741884" (UID: "921d1a28-ead8-42a6-933c-38a339741884"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.027189 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "921d1a28-ead8-42a6-933c-38a339741884" (UID: "921d1a28-ead8-42a6-933c-38a339741884"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.034463 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.034495 4942 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.034504 4942 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.048536 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "921d1a28-ead8-42a6-933c-38a339741884" (UID: "921d1a28-ead8-42a6-933c-38a339741884"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.050649 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82d59804-3d83-4594-855b-f08b93e146a4" path="/var/lib/kubelet/pods/82d59804-3d83-4594-855b-f08b93e146a4/volumes" Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.061819 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "921d1a28-ead8-42a6-933c-38a339741884" (UID: "921d1a28-ead8-42a6-933c-38a339741884"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.137152 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.137193 4942 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/921d1a28-ead8-42a6-933c-38a339741884-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.345598 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8c9f8ffc-qtdr8" event={"ID":"921d1a28-ead8-42a6-933c-38a339741884","Type":"ContainerDied","Data":"66f57c246570cb64775a601036f5870a5885605c57cb8be2088eae510c596f8b"} Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.345645 4942 scope.go:117] "RemoveContainer" containerID="531ee7816fd7353cd71c0f54232b96ad0dd37eddd3c96b8ac1f0e58197be9795" Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.345803 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6b8c9f8ffc-qtdr8" Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.353143 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb08df0a-0162-4e04-a641-6fd65af9048b","Type":"ContainerStarted","Data":"9ecd7aaddb526f7a536755bf17c5ed2cdffb53f01f22747fc9607ce810b409a8"} Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.355361 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33","Type":"ContainerStarted","Data":"ad038fc1ee429cd544c0e75765ad1ed5a8e87869e90710cfa255fd8624784168"} Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.367080 4942 generic.go:334] "Generic (PLEG): container finished" podID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerID="4bd98068ec637cd03846de3ac7d0bc145a81ebf089811ebc4b9501aa76cae874" exitCode=0 Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.367197 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54d64cf59b-xp7rk" event={"ID":"3ecc91e6-4e7f-438f-8530-bb8dd55764c5","Type":"ContainerDied","Data":"4bd98068ec637cd03846de3ac7d0bc145a81ebf089811ebc4b9501aa76cae874"} Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.368972 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7466887594-rv5fb" event={"ID":"46e4caaf-033e-499f-ba62-77297ea9bf09","Type":"ContainerStarted","Data":"4ffa8e9432b6d4ce19fa4001c63b4c9090479b3a053a642b9e8553aa1018e9d7"} Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.369021 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7466887594-rv5fb" event={"ID":"46e4caaf-033e-499f-ba62-77297ea9bf09","Type":"ContainerStarted","Data":"f1833c1ccdd12413f87fcdc260f82fdab94d8363b1c89c2dbb4056950bdcb7cf"} Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.370311 4942 scope.go:117] "RemoveContainer" 
containerID="5406c6b90781279268f75608c064a21d3a65e4eb4c8a4c7e959d4465b49185b9" Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.391206 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6b8c9f8ffc-qtdr8"] Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.398555 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6b8c9f8ffc-qtdr8"] Feb 18 19:35:59 crc kubenswrapper[4942]: I0218 19:35:59.527294 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-54d64cf59b-xp7rk" podUID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.158:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.158:8443: connect: connection refused" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.163980 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.380732 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7466887594-rv5fb" event={"ID":"46e4caaf-033e-499f-ba62-77297ea9bf09","Type":"ContainerStarted","Data":"496841a8a2bcd2e51d01f58997e1633d5cebae01d7a3e67cfc824c322b9f302a"} Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.381806 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.381833 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.390559 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33","Type":"ContainerStarted","Data":"fe39f5bff16e5bbe2053562d9c1cf7bbf6bd07f8ce8109ac820b957d380fec40"} Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 
19:36:00.390594 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b2b3ec4f-5cab-4036-8450-0a9f7a5eae33","Type":"ContainerStarted","Data":"810ebbebb7330da8a4a8589f64ea283676bd49a0b492e606a4519a0883509167"} Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.391131 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.412998 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9bf555976-zxfhl"] Feb 18 19:36:00 crc kubenswrapper[4942]: E0218 19:36:00.413368 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921d1a28-ead8-42a6-933c-38a339741884" containerName="neutron-api" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.413384 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="921d1a28-ead8-42a6-933c-38a339741884" containerName="neutron-api" Feb 18 19:36:00 crc kubenswrapper[4942]: E0218 19:36:00.413405 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921d1a28-ead8-42a6-933c-38a339741884" containerName="neutron-httpd" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.413412 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="921d1a28-ead8-42a6-933c-38a339741884" containerName="neutron-httpd" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.413595 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="921d1a28-ead8-42a6-933c-38a339741884" containerName="neutron-httpd" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.413615 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="921d1a28-ead8-42a6-933c-38a339741884" containerName="neutron-api" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.414549 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.420119 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7466887594-rv5fb" podStartSLOduration=3.420102179 podStartE2EDuration="3.420102179s" podCreationTimestamp="2026-02-18 19:35:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:00.412331857 +0000 UTC m=+1120.117264522" watchObservedRunningTime="2026-02-18 19:36:00.420102179 +0000 UTC m=+1120.125034844" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.452948 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9bf555976-zxfhl"] Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.468540 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.468520738 podStartE2EDuration="3.468520738s" podCreationTimestamp="2026-02-18 19:35:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:00.43745918 +0000 UTC m=+1120.142391845" watchObservedRunningTime="2026-02-18 19:36:00.468520738 +0000 UTC m=+1120.173453403" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.487490 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19290621-80f0-4d8b-b200-d3cce6889538-logs\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.487530 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-internal-tls-certs\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.487550 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7l6m\" (UniqueName: \"kubernetes.io/projected/19290621-80f0-4d8b-b200-d3cce6889538-kube-api-access-p7l6m\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.487570 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-combined-ca-bundle\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.487623 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-public-tls-certs\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.487663 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-scripts\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.487807 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-config-data\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.589869 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19290621-80f0-4d8b-b200-d3cce6889538-logs\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.589955 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-internal-tls-certs\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.589985 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7l6m\" (UniqueName: \"kubernetes.io/projected/19290621-80f0-4d8b-b200-d3cce6889538-kube-api-access-p7l6m\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.590018 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-combined-ca-bundle\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.590079 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-public-tls-certs\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.590129 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-scripts\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.590250 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-config-data\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.590352 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19290621-80f0-4d8b-b200-d3cce6889538-logs\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.599956 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-config-data\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.601403 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-internal-tls-certs\") pod \"placement-9bf555976-zxfhl\" (UID: 
\"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.605270 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-scripts\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.607376 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-public-tls-certs\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.613544 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19290621-80f0-4d8b-b200-d3cce6889538-combined-ca-bundle\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.618250 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7l6m\" (UniqueName: \"kubernetes.io/projected/19290621-80f0-4d8b-b200-d3cce6889538-kube-api-access-p7l6m\") pod \"placement-9bf555976-zxfhl\" (UID: \"19290621-80f0-4d8b-b200-d3cce6889538\") " pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.655344 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:36:00 crc kubenswrapper[4942]: I0218 19:36:00.731803 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:01 crc kubenswrapper[4942]: I0218 19:36:01.060308 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="921d1a28-ead8-42a6-933c-38a339741884" path="/var/lib/kubelet/pods/921d1a28-ead8-42a6-933c-38a339741884/volumes" Feb 18 19:36:01 crc kubenswrapper[4942]: I0218 19:36:01.181150 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9bf555976-zxfhl"] Feb 18 19:36:01 crc kubenswrapper[4942]: I0218 19:36:01.281061 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 19:36:01 crc kubenswrapper[4942]: I0218 19:36:01.400490 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9bf555976-zxfhl" event={"ID":"19290621-80f0-4d8b-b200-d3cce6889538","Type":"ContainerStarted","Data":"3a879eb8eca35f47b596b651b0ea4eff90e88803d3978b94f1b13f1e9a9997ca"} Feb 18 19:36:01 crc kubenswrapper[4942]: I0218 19:36:01.454951 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:36:01 crc kubenswrapper[4942]: I0218 19:36:01.564805 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-b4sf9"] Feb 18 19:36:01 crc kubenswrapper[4942]: I0218 19:36:01.565028 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" podUID="3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" containerName="dnsmasq-dns" containerID="cri-o://07ed859237f582f1701b07e571f92a114e6576149d9ab982ddb17cd24aca3587" gracePeriod=10 Feb 18 19:36:01 crc kubenswrapper[4942]: I0218 19:36:01.777182 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" podUID="3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.164:5353: connect: connection refused" Feb 
18 19:36:01 crc kubenswrapper[4942]: I0218 19:36:01.809830 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 19:36:01 crc kubenswrapper[4942]: I0218 19:36:01.873119 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.170490 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.170819 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerName="watcher-api-log" containerID="cri-o://aa132dbcbfbe636d2466bf98fe3a945bcf6b8f37a1c6b00263bbaa8b8d41b75b" gracePeriod=30 Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.171385 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerName="watcher-api" containerID="cri-o://a5770f508e1c40bf4ef682bff10bac69873d582c1a0625dbd01c701b14695817" gracePeriod=30 Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.225269 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.356054 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gq76\" (UniqueName: \"kubernetes.io/projected/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-kube-api-access-7gq76\") pod \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.356370 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-ovsdbserver-sb\") pod \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.356470 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-config\") pod \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.356486 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-ovsdbserver-nb\") pod \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.356544 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-dns-swift-storage-0\") pod \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.356566 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-dns-svc\") pod \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\" (UID: \"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd\") " Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.369958 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-kube-api-access-7gq76" (OuterVolumeSpecName: "kube-api-access-7gq76") pod "3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" (UID: "3eb861b2-8f3f-482a-98b8-e4aa9de98ecd"). InnerVolumeSpecName "kube-api-access-7gq76". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.445137 4942 generic.go:334] "Generic (PLEG): container finished" podID="3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" containerID="07ed859237f582f1701b07e571f92a114e6576149d9ab982ddb17cd24aca3587" exitCode=0 Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.445224 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" event={"ID":"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd","Type":"ContainerDied","Data":"07ed859237f582f1701b07e571f92a114e6576149d9ab982ddb17cd24aca3587"} Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.445250 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" event={"ID":"3eb861b2-8f3f-482a-98b8-e4aa9de98ecd","Type":"ContainerDied","Data":"a28152676e5bbeaa52dbf0acfa190644662ce9fce2d0b5f7310504317b4faf82"} Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.445265 4942 scope.go:117] "RemoveContainer" containerID="07ed859237f582f1701b07e571f92a114e6576149d9ab982ddb17cd24aca3587" Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.445389 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-b4sf9" Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.458462 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gq76\" (UniqueName: \"kubernetes.io/projected/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-kube-api-access-7gq76\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.458612 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" (UID: "3eb861b2-8f3f-482a-98b8-e4aa9de98ecd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.462958 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" (UID: "3eb861b2-8f3f-482a-98b8-e4aa9de98ecd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.463912 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" (UID: "3eb861b2-8f3f-482a-98b8-e4aa9de98ecd"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.467222 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb08df0a-0162-4e04-a641-6fd65af9048b","Type":"ContainerStarted","Data":"33f88e67e2d64ef0cdf5c3ea9ad2d23061784bba770fa1c0fe079285a1cbbc56"} Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.468392 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.487342 4942 generic.go:334] "Generic (PLEG): container finished" podID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerID="aa132dbcbfbe636d2466bf98fe3a945bcf6b8f37a1c6b00263bbaa8b8d41b75b" exitCode=143 Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.487429 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"cf325d20-c507-42cc-b96f-6e57ff55aa53","Type":"ContainerDied","Data":"aa132dbcbfbe636d2466bf98fe3a945bcf6b8f37a1c6b00263bbaa8b8d41b75b"} Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.498054 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="17399208-02d7-46c9-b5ea-b01563e8baf1" containerName="cinder-scheduler" containerID="cri-o://ab6c4d04ee142d2e6670d2eba83ed1f3609e146414eca1aa78da29e2ecfc3a7c" gracePeriod=30 Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.498627 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9bf555976-zxfhl" event={"ID":"19290621-80f0-4d8b-b200-d3cce6889538","Type":"ContainerStarted","Data":"ab7f32936c01a7d49bbf7291938fdcdda17569c42b2beb30aa56125c0e8689d4"} Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.498712 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.498729 4942 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/placement-9bf555976-zxfhl" event={"ID":"19290621-80f0-4d8b-b200-d3cce6889538","Type":"ContainerStarted","Data":"318cdd790ce9717634ea5f676a2cb9d466b36b0a3d6579495f195b9ff09ceada"} Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.498821 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.499179 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="17399208-02d7-46c9-b5ea-b01563e8baf1" containerName="probe" containerID="cri-o://24e727d5d2fb180c7e5b210ba8e9f70f0b0d6335ad6d3b2ef9160574585ddb26" gracePeriod=30 Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.507937 4942 scope.go:117] "RemoveContainer" containerID="9bb47534d9e06becc5f445ae59185cbfce5bbc93ac6da1f08bbfa8a94ab2efbe" Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.526252 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" (UID: "3eb861b2-8f3f-482a-98b8-e4aa9de98ecd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.529680 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-config" (OuterVolumeSpecName: "config") pod "3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" (UID: "3eb861b2-8f3f-482a-98b8-e4aa9de98ecd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.539117 4942 scope.go:117] "RemoveContainer" containerID="07ed859237f582f1701b07e571f92a114e6576149d9ab982ddb17cd24aca3587" Feb 18 19:36:02 crc kubenswrapper[4942]: E0218 19:36:02.541803 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07ed859237f582f1701b07e571f92a114e6576149d9ab982ddb17cd24aca3587\": container with ID starting with 07ed859237f582f1701b07e571f92a114e6576149d9ab982ddb17cd24aca3587 not found: ID does not exist" containerID="07ed859237f582f1701b07e571f92a114e6576149d9ab982ddb17cd24aca3587" Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.541876 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07ed859237f582f1701b07e571f92a114e6576149d9ab982ddb17cd24aca3587"} err="failed to get container status \"07ed859237f582f1701b07e571f92a114e6576149d9ab982ddb17cd24aca3587\": rpc error: code = NotFound desc = could not find container \"07ed859237f582f1701b07e571f92a114e6576149d9ab982ddb17cd24aca3587\": container with ID starting with 07ed859237f582f1701b07e571f92a114e6576149d9ab982ddb17cd24aca3587 not found: ID does not exist" Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.541904 4942 scope.go:117] "RemoveContainer" containerID="9bb47534d9e06becc5f445ae59185cbfce5bbc93ac6da1f08bbfa8a94ab2efbe" Feb 18 19:36:02 crc kubenswrapper[4942]: E0218 19:36:02.542485 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bb47534d9e06becc5f445ae59185cbfce5bbc93ac6da1f08bbfa8a94ab2efbe\": container with ID starting with 9bb47534d9e06becc5f445ae59185cbfce5bbc93ac6da1f08bbfa8a94ab2efbe not found: ID does not exist" containerID="9bb47534d9e06becc5f445ae59185cbfce5bbc93ac6da1f08bbfa8a94ab2efbe" Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.542563 
4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bb47534d9e06becc5f445ae59185cbfce5bbc93ac6da1f08bbfa8a94ab2efbe"} err="failed to get container status \"9bb47534d9e06becc5f445ae59185cbfce5bbc93ac6da1f08bbfa8a94ab2efbe\": rpc error: code = NotFound desc = could not find container \"9bb47534d9e06becc5f445ae59185cbfce5bbc93ac6da1f08bbfa8a94ab2efbe\": container with ID starting with 9bb47534d9e06becc5f445ae59185cbfce5bbc93ac6da1f08bbfa8a94ab2efbe not found: ID does not exist" Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.552865 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.365925938 podStartE2EDuration="7.552840929s" podCreationTimestamp="2026-02-18 19:35:55 +0000 UTC" firstStartedPulling="2026-02-18 19:35:56.351526079 +0000 UTC m=+1116.056458744" lastFinishedPulling="2026-02-18 19:36:01.53844107 +0000 UTC m=+1121.243373735" observedRunningTime="2026-02-18 19:36:02.511129705 +0000 UTC m=+1122.216062370" watchObservedRunningTime="2026-02-18 19:36:02.552840929 +0000 UTC m=+1122.257773594" Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.554118 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-9bf555976-zxfhl" podStartSLOduration=2.554110112 podStartE2EDuration="2.554110112s" podCreationTimestamp="2026-02-18 19:36:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:02.538097035 +0000 UTC m=+1122.243029700" watchObservedRunningTime="2026-02-18 19:36:02.554110112 +0000 UTC m=+1122.259042777" Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.560542 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:02 crc 
kubenswrapper[4942]: I0218 19:36:02.560567 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.560576 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.560609 4942 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.560622 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.786091 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-b4sf9"] Feb 18 19:36:02 crc kubenswrapper[4942]: I0218 19:36:02.800892 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-b4sf9"] Feb 18 19:36:03 crc kubenswrapper[4942]: I0218 19:36:03.047477 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" path="/var/lib/kubelet/pods/3eb861b2-8f3f-482a-98b8-e4aa9de98ecd/volumes" Feb 18 19:36:03 crc kubenswrapper[4942]: I0218 19:36:03.490147 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-56897c69bf-gkt87" Feb 18 19:36:03 crc kubenswrapper[4942]: I0218 19:36:03.498749 4942 generic.go:334] "Generic (PLEG): container finished" podID="17399208-02d7-46c9-b5ea-b01563e8baf1" 
containerID="24e727d5d2fb180c7e5b210ba8e9f70f0b0d6335ad6d3b2ef9160574585ddb26" exitCode=0 Feb 18 19:36:03 crc kubenswrapper[4942]: I0218 19:36:03.498801 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"17399208-02d7-46c9-b5ea-b01563e8baf1","Type":"ContainerDied","Data":"24e727d5d2fb180c7e5b210ba8e9f70f0b0d6335ad6d3b2ef9160574585ddb26"} Feb 18 19:36:03 crc kubenswrapper[4942]: I0218 19:36:03.651400 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:36:03 crc kubenswrapper[4942]: I0218 19:36:03.769440 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-9d65dd5d-c4zgj" Feb 18 19:36:04 crc kubenswrapper[4942]: I0218 19:36:04.511954 4942 generic.go:334] "Generic (PLEG): container finished" podID="17399208-02d7-46c9-b5ea-b01563e8baf1" containerID="ab6c4d04ee142d2e6670d2eba83ed1f3609e146414eca1aa78da29e2ecfc3a7c" exitCode=0 Feb 18 19:36:04 crc kubenswrapper[4942]: I0218 19:36:04.512105 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"17399208-02d7-46c9-b5ea-b01563e8baf1","Type":"ContainerDied","Data":"ab6c4d04ee142d2e6670d2eba83ed1f3609e146414eca1aa78da29e2ecfc3a7c"} Feb 18 19:36:04 crc kubenswrapper[4942]: I0218 19:36:04.846992 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 19:36:04 crc kubenswrapper[4942]: I0218 19:36:04.921222 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb8kr\" (UniqueName: \"kubernetes.io/projected/17399208-02d7-46c9-b5ea-b01563e8baf1-kube-api-access-nb8kr\") pod \"17399208-02d7-46c9-b5ea-b01563e8baf1\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " Feb 18 19:36:04 crc kubenswrapper[4942]: I0218 19:36:04.921273 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-config-data\") pod \"17399208-02d7-46c9-b5ea-b01563e8baf1\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " Feb 18 19:36:04 crc kubenswrapper[4942]: I0218 19:36:04.921347 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-scripts\") pod \"17399208-02d7-46c9-b5ea-b01563e8baf1\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " Feb 18 19:36:04 crc kubenswrapper[4942]: I0218 19:36:04.921403 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-config-data-custom\") pod \"17399208-02d7-46c9-b5ea-b01563e8baf1\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " Feb 18 19:36:04 crc kubenswrapper[4942]: I0218 19:36:04.921457 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-combined-ca-bundle\") pod \"17399208-02d7-46c9-b5ea-b01563e8baf1\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " Feb 18 19:36:04 crc kubenswrapper[4942]: I0218 19:36:04.921527 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/17399208-02d7-46c9-b5ea-b01563e8baf1-etc-machine-id\") pod \"17399208-02d7-46c9-b5ea-b01563e8baf1\" (UID: \"17399208-02d7-46c9-b5ea-b01563e8baf1\") " Feb 18 19:36:04 crc kubenswrapper[4942]: I0218 19:36:04.922075 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17399208-02d7-46c9-b5ea-b01563e8baf1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "17399208-02d7-46c9-b5ea-b01563e8baf1" (UID: "17399208-02d7-46c9-b5ea-b01563e8baf1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 19:36:04 crc kubenswrapper[4942]: I0218 19:36:04.937278 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17399208-02d7-46c9-b5ea-b01563e8baf1-kube-api-access-nb8kr" (OuterVolumeSpecName: "kube-api-access-nb8kr") pod "17399208-02d7-46c9-b5ea-b01563e8baf1" (UID: "17399208-02d7-46c9-b5ea-b01563e8baf1"). InnerVolumeSpecName "kube-api-access-nb8kr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:04 crc kubenswrapper[4942]: I0218 19:36:04.937498 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "17399208-02d7-46c9-b5ea-b01563e8baf1" (UID: "17399208-02d7-46c9-b5ea-b01563e8baf1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:04 crc kubenswrapper[4942]: I0218 19:36:04.939875 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-scripts" (OuterVolumeSpecName: "scripts") pod "17399208-02d7-46c9-b5ea-b01563e8baf1" (UID: "17399208-02d7-46c9-b5ea-b01563e8baf1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.026571 4942 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/17399208-02d7-46c9-b5ea-b01563e8baf1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.026603 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb8kr\" (UniqueName: \"kubernetes.io/projected/17399208-02d7-46c9-b5ea-b01563e8baf1-kube-api-access-nb8kr\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.026613 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.026622 4942 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.047862 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-config-data" (OuterVolumeSpecName: "config-data") pod "17399208-02d7-46c9-b5ea-b01563e8baf1" (UID: "17399208-02d7-46c9-b5ea-b01563e8baf1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.047936 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17399208-02d7-46c9-b5ea-b01563e8baf1" (UID: "17399208-02d7-46c9-b5ea-b01563e8baf1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.057963 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7466887594-rv5fb" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.129454 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.129673 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17399208-02d7-46c9-b5ea-b01563e8baf1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.335902 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.172:9322/\": read tcp 10.217.0.2:50166->10.217.0.172:9322: read: connection reset by peer" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.335964 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.172:9322/\": read tcp 10.217.0.2:50150->10.217.0.172:9322: read: connection reset by peer" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.523004 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"17399208-02d7-46c9-b5ea-b01563e8baf1","Type":"ContainerDied","Data":"0e634a244135542433fea3600e46692e4afcef32f4f22d2c4274a7c75eb4af2b"} Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.523055 4942 scope.go:117] "RemoveContainer" containerID="24e727d5d2fb180c7e5b210ba8e9f70f0b0d6335ad6d3b2ef9160574585ddb26" Feb 18 
19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.523183 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.528714 4942 generic.go:334] "Generic (PLEG): container finished" podID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerID="a5770f508e1c40bf4ef682bff10bac69873d582c1a0625dbd01c701b14695817" exitCode=0 Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.528866 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"cf325d20-c507-42cc-b96f-6e57ff55aa53","Type":"ContainerDied","Data":"a5770f508e1c40bf4ef682bff10bac69873d582c1a0625dbd01c701b14695817"} Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.555961 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.584594 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.601831 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:36:05 crc kubenswrapper[4942]: E0218 19:36:05.602524 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17399208-02d7-46c9-b5ea-b01563e8baf1" containerName="probe" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.602544 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="17399208-02d7-46c9-b5ea-b01563e8baf1" containerName="probe" Feb 18 19:36:05 crc kubenswrapper[4942]: E0218 19:36:05.602574 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" containerName="init" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.602580 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" containerName="init" Feb 18 19:36:05 crc kubenswrapper[4942]: E0218 19:36:05.602616 
4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17399208-02d7-46c9-b5ea-b01563e8baf1" containerName="cinder-scheduler" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.602622 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="17399208-02d7-46c9-b5ea-b01563e8baf1" containerName="cinder-scheduler" Feb 18 19:36:05 crc kubenswrapper[4942]: E0218 19:36:05.602650 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" containerName="dnsmasq-dns" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.602657 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" containerName="dnsmasq-dns" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.602883 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb861b2-8f3f-482a-98b8-e4aa9de98ecd" containerName="dnsmasq-dns" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.602900 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="17399208-02d7-46c9-b5ea-b01563e8baf1" containerName="probe" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.602933 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="17399208-02d7-46c9-b5ea-b01563e8baf1" containerName="cinder-scheduler" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.604191 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.607046 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.607095 4942 scope.go:117] "RemoveContainer" containerID="ab6c4d04ee142d2e6670d2eba83ed1f3609e146414eca1aa78da29e2ecfc3a7c" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.619586 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.638141 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-scripts\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.638176 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.638194 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-config-data\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.638218 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rs4l\" (UniqueName: \"kubernetes.io/projected/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-kube-api-access-5rs4l\") 
pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.638237 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.638273 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.741736 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-scripts\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.741794 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.741816 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-config-data\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc 
kubenswrapper[4942]: I0218 19:36:05.741844 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rs4l\" (UniqueName: \"kubernetes.io/projected/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-kube-api-access-5rs4l\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.741864 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.741907 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.746679 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.746990 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.750739 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-config-data\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.755020 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.755328 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-scripts\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.789483 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rs4l\" (UniqueName: \"kubernetes.io/projected/e7ce79f4-8fac-499d-aa4d-1ca6b2b50259-kube-api-access-5rs4l\") pod \"cinder-scheduler-0\" (UID: \"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259\") " pod="openstack/cinder-scheduler-0" Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.818198 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0"
Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.944656 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-config-data\") pod \"cf325d20-c507-42cc-b96f-6e57ff55aa53\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") "
Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.944787 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf325d20-c507-42cc-b96f-6e57ff55aa53-logs\") pod \"cf325d20-c507-42cc-b96f-6e57ff55aa53\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") "
Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.944832 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-combined-ca-bundle\") pod \"cf325d20-c507-42cc-b96f-6e57ff55aa53\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") "
Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.944851 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-485k7\" (UniqueName: \"kubernetes.io/projected/cf325d20-c507-42cc-b96f-6e57ff55aa53-kube-api-access-485k7\") pod \"cf325d20-c507-42cc-b96f-6e57ff55aa53\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") "
Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.944906 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-custom-prometheus-ca\") pod \"cf325d20-c507-42cc-b96f-6e57ff55aa53\" (UID: \"cf325d20-c507-42cc-b96f-6e57ff55aa53\") "
Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.945459 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf325d20-c507-42cc-b96f-6e57ff55aa53-logs" (OuterVolumeSpecName: "logs") pod "cf325d20-c507-42cc-b96f-6e57ff55aa53" (UID: "cf325d20-c507-42cc-b96f-6e57ff55aa53"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.948595 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf325d20-c507-42cc-b96f-6e57ff55aa53-kube-api-access-485k7" (OuterVolumeSpecName: "kube-api-access-485k7") pod "cf325d20-c507-42cc-b96f-6e57ff55aa53" (UID: "cf325d20-c507-42cc-b96f-6e57ff55aa53"). InnerVolumeSpecName "kube-api-access-485k7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.952955 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 18 19:36:05 crc kubenswrapper[4942]: I0218 19:36:05.984673 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "cf325d20-c507-42cc-b96f-6e57ff55aa53" (UID: "cf325d20-c507-42cc-b96f-6e57ff55aa53"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.003521 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf325d20-c507-42cc-b96f-6e57ff55aa53" (UID: "cf325d20-c507-42cc-b96f-6e57ff55aa53"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.008854 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-config-data" (OuterVolumeSpecName: "config-data") pod "cf325d20-c507-42cc-b96f-6e57ff55aa53" (UID: "cf325d20-c507-42cc-b96f-6e57ff55aa53"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.046553 4942 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.046581 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.046589 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf325d20-c507-42cc-b96f-6e57ff55aa53-logs\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.046600 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf325d20-c507-42cc-b96f-6e57ff55aa53-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.046610 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-485k7\" (UniqueName: \"kubernetes.io/projected/cf325d20-c507-42cc-b96f-6e57ff55aa53-kube-api-access-485k7\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:06 crc kubenswrapper[4942]: W0218 19:36:06.490931 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7ce79f4_8fac_499d_aa4d_1ca6b2b50259.slice/crio-0b8007bb2b22198f3e91f17ff9f81ad24951fa3b38c0d678886241682b40539e WatchSource:0}: Error finding container 0b8007bb2b22198f3e91f17ff9f81ad24951fa3b38c0d678886241682b40539e: Status 404 returned error can't find the container with id 0b8007bb2b22198f3e91f17ff9f81ad24951fa3b38c0d678886241682b40539e
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.498731 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.555713 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"cf325d20-c507-42cc-b96f-6e57ff55aa53","Type":"ContainerDied","Data":"796b3cc6f87bbc8cea79f9f672a04a291cbb2f04782a6f0d27d4592a418cd947"}
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.555775 4942 scope.go:117] "RemoveContainer" containerID="a5770f508e1c40bf4ef682bff10bac69873d582c1a0625dbd01c701b14695817"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.555864 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.563303 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259","Type":"ContainerStarted","Data":"0b8007bb2b22198f3e91f17ff9f81ad24951fa3b38c0d678886241682b40539e"}
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.624921 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"]
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.629971 4942 scope.go:117] "RemoveContainer" containerID="aa132dbcbfbe636d2466bf98fe3a945bcf6b8f37a1c6b00263bbaa8b8d41b75b"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.652834 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"]
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.660552 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"]
Feb 18 19:36:06 crc kubenswrapper[4942]: E0218 19:36:06.660928 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerName="watcher-api-log"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.660943 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerName="watcher-api-log"
Feb 18 19:36:06 crc kubenswrapper[4942]: E0218 19:36:06.660951 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerName="watcher-api"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.660957 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerName="watcher-api"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.661140 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerName="watcher-api-log"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.661164 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf325d20-c507-42cc-b96f-6e57ff55aa53" containerName="watcher-api"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.662072 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.668239 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.668535 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.668676 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.702816 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"]
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.762772 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qkvd\" (UniqueName: \"kubernetes.io/projected/618db7e3-a45b-472e-8341-bce342277a17-kube-api-access-5qkvd\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.762820 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/618db7e3-a45b-472e-8341-bce342277a17-logs\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.762862 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-public-tls-certs\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.762880 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-config-data\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.762912 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.762936 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.762958 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.801632 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.802969 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.812512 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.812570 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-lmtd6"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.812707 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.850108 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.864081 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/38accb89-093d-4b4b-b098-b4f73a4bb561-openstack-config-secret\") pod \"openstackclient\" (UID: \"38accb89-093d-4b4b-b098-b4f73a4bb561\") " pod="openstack/openstackclient"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.864122 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qbj4\" (UniqueName: \"kubernetes.io/projected/38accb89-093d-4b4b-b098-b4f73a4bb561-kube-api-access-6qbj4\") pod \"openstackclient\" (UID: \"38accb89-093d-4b4b-b098-b4f73a4bb561\") " pod="openstack/openstackclient"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.864153 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qkvd\" (UniqueName: \"kubernetes.io/projected/618db7e3-a45b-472e-8341-bce342277a17-kube-api-access-5qkvd\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.864178 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/618db7e3-a45b-472e-8341-bce342277a17-logs\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.864223 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-public-tls-certs\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.864241 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-config-data\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.864262 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/38accb89-093d-4b4b-b098-b4f73a4bb561-openstack-config\") pod \"openstackclient\" (UID: \"38accb89-093d-4b4b-b098-b4f73a4bb561\") " pod="openstack/openstackclient"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.864283 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.864305 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.864328 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.864349 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38accb89-093d-4b4b-b098-b4f73a4bb561-combined-ca-bundle\") pod \"openstackclient\" (UID: \"38accb89-093d-4b4b-b098-b4f73a4bb561\") " pod="openstack/openstackclient"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.865068 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/618db7e3-a45b-472e-8341-bce342277a17-logs\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.869860 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-public-tls-certs\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.871742 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.872061 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.873644 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-config-data\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.874103 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/618db7e3-a45b-472e-8341-bce342277a17-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.879989 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qkvd\" (UniqueName: \"kubernetes.io/projected/618db7e3-a45b-472e-8341-bce342277a17-kube-api-access-5qkvd\") pod \"watcher-api-0\" (UID: \"618db7e3-a45b-472e-8341-bce342277a17\") " pod="openstack/watcher-api-0"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.966463 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/38accb89-093d-4b4b-b098-b4f73a4bb561-openstack-config\") pod \"openstackclient\" (UID: \"38accb89-093d-4b4b-b098-b4f73a4bb561\") " pod="openstack/openstackclient"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.966535 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38accb89-093d-4b4b-b098-b4f73a4bb561-combined-ca-bundle\") pod \"openstackclient\" (UID: \"38accb89-093d-4b4b-b098-b4f73a4bb561\") " pod="openstack/openstackclient"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.966618 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/38accb89-093d-4b4b-b098-b4f73a4bb561-openstack-config-secret\") pod \"openstackclient\" (UID: \"38accb89-093d-4b4b-b098-b4f73a4bb561\") " pod="openstack/openstackclient"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.966639 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qbj4\" (UniqueName: \"kubernetes.io/projected/38accb89-093d-4b4b-b098-b4f73a4bb561-kube-api-access-6qbj4\") pod \"openstackclient\" (UID: \"38accb89-093d-4b4b-b098-b4f73a4bb561\") " pod="openstack/openstackclient"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.967648 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/38accb89-093d-4b4b-b098-b4f73a4bb561-openstack-config\") pod \"openstackclient\" (UID: \"38accb89-093d-4b4b-b098-b4f73a4bb561\") " pod="openstack/openstackclient"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.974383 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/38accb89-093d-4b4b-b098-b4f73a4bb561-openstack-config-secret\") pod \"openstackclient\" (UID: \"38accb89-093d-4b4b-b098-b4f73a4bb561\") " pod="openstack/openstackclient"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.974937 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38accb89-093d-4b4b-b098-b4f73a4bb561-combined-ca-bundle\") pod \"openstackclient\" (UID: \"38accb89-093d-4b4b-b098-b4f73a4bb561\") " pod="openstack/openstackclient"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.990786 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Feb 18 19:36:06 crc kubenswrapper[4942]: I0218 19:36:06.996995 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qbj4\" (UniqueName: \"kubernetes.io/projected/38accb89-093d-4b4b-b098-b4f73a4bb561-kube-api-access-6qbj4\") pod \"openstackclient\" (UID: \"38accb89-093d-4b4b-b098-b4f73a4bb561\") " pod="openstack/openstackclient"
Feb 18 19:36:07 crc kubenswrapper[4942]: I0218 19:36:07.078412 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17399208-02d7-46c9-b5ea-b01563e8baf1" path="/var/lib/kubelet/pods/17399208-02d7-46c9-b5ea-b01563e8baf1/volumes"
Feb 18 19:36:07 crc kubenswrapper[4942]: I0218 19:36:07.080628 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf325d20-c507-42cc-b96f-6e57ff55aa53" path="/var/lib/kubelet/pods/cf325d20-c507-42cc-b96f-6e57ff55aa53/volumes"
Feb 18 19:36:07 crc kubenswrapper[4942]: I0218 19:36:07.115573 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7466887594-rv5fb"
Feb 18 19:36:07 crc kubenswrapper[4942]: I0218 19:36:07.143098 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 18 19:36:07 crc kubenswrapper[4942]: I0218 19:36:07.174608 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-9d65dd5d-c4zgj"]
Feb 18 19:36:07 crc kubenswrapper[4942]: I0218 19:36:07.174888 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-9d65dd5d-c4zgj" podUID="e4cc3ba2-abea-4fa2-9272-65ac8721c87d" containerName="barbican-api-log" containerID="cri-o://2b088a9056603d3d58e3baff59e58248fc06291c4ff662a1d08a6fc2664c9a1c" gracePeriod=30
Feb 18 19:36:07 crc kubenswrapper[4942]: I0218 19:36:07.175356 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-9d65dd5d-c4zgj" podUID="e4cc3ba2-abea-4fa2-9272-65ac8721c87d" containerName="barbican-api" containerID="cri-o://ecbd025dc0394b9034d21e03a44147434ce1904d40c5ab1c61c7e88c90aadd1e" gracePeriod=30
Feb 18 19:36:07 crc kubenswrapper[4942]: I0218 19:36:07.596993 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259","Type":"ContainerStarted","Data":"a4316a50ea1a16243db84d37fb517e94ea394f23b89e3660f9729bb3224e6560"}
Feb 18 19:36:07 crc kubenswrapper[4942]: I0218 19:36:07.612153 4942 generic.go:334] "Generic (PLEG): container finished" podID="e4cc3ba2-abea-4fa2-9272-65ac8721c87d" containerID="2b088a9056603d3d58e3baff59e58248fc06291c4ff662a1d08a6fc2664c9a1c" exitCode=143
Feb 18 19:36:07 crc kubenswrapper[4942]: I0218 19:36:07.612216 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9d65dd5d-c4zgj" event={"ID":"e4cc3ba2-abea-4fa2-9272-65ac8721c87d","Type":"ContainerDied","Data":"2b088a9056603d3d58e3baff59e58248fc06291c4ff662a1d08a6fc2664c9a1c"}
Feb 18 19:36:07 crc kubenswrapper[4942]: I0218 19:36:07.661998 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 18 19:36:07 crc kubenswrapper[4942]: I0218 19:36:07.705472 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"]
Feb 18 19:36:07 crc kubenswrapper[4942]: W0218 19:36:07.708178 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38accb89_093d_4b4b_b098_b4f73a4bb561.slice/crio-ff1cdb0b7e3b37cce93a44da8103dcaa016bfadf422c68a610fa69a6094f9cc2 WatchSource:0}: Error finding container ff1cdb0b7e3b37cce93a44da8103dcaa016bfadf422c68a610fa69a6094f9cc2: Status 404 returned error can't find the container with id ff1cdb0b7e3b37cce93a44da8103dcaa016bfadf422c68a610fa69a6094f9cc2
Feb 18 19:36:08 crc kubenswrapper[4942]: I0218 19:36:08.631097 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"618db7e3-a45b-472e-8341-bce342277a17","Type":"ContainerStarted","Data":"823c265199a899046da3b6d893152513a99c4c8a0856243e6a7ab9e94e7c5d6b"}
Feb 18 19:36:08 crc kubenswrapper[4942]: I0218 19:36:08.631693 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"618db7e3-a45b-472e-8341-bce342277a17","Type":"ContainerStarted","Data":"f4180aa5041d1857f75582c9a3b1a76fe76e0ec02072654dd06aa9a911fc7f4e"}
Feb 18 19:36:08 crc kubenswrapper[4942]: I0218 19:36:08.631715 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Feb 18 19:36:08 crc kubenswrapper[4942]: I0218 19:36:08.631725 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"618db7e3-a45b-472e-8341-bce342277a17","Type":"ContainerStarted","Data":"cadeec1713837beea98b89d34f97a59e57ba43023eb28c9ee03066e955fb17d2"}
Feb 18 19:36:08 crc kubenswrapper[4942]: I0218 19:36:08.633503 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="618db7e3-a45b-472e-8341-bce342277a17" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.187:9322/\": dial tcp 10.217.0.187:9322: connect: connection refused"
Feb 18 19:36:08 crc kubenswrapper[4942]: I0218 19:36:08.644029 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259","Type":"ContainerStarted","Data":"f125ab975ab7eb97174e78d37f99220d729200ee72a3e7255f597efca4a8defc"}
Feb 18 19:36:08 crc kubenswrapper[4942]: I0218 19:36:08.650478 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"38accb89-093d-4b4b-b098-b4f73a4bb561","Type":"ContainerStarted","Data":"ff1cdb0b7e3b37cce93a44da8103dcaa016bfadf422c68a610fa69a6094f9cc2"}
Feb 18 19:36:08 crc kubenswrapper[4942]: I0218 19:36:08.667791 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.6677535409999997 podStartE2EDuration="2.667753541s" podCreationTimestamp="2026-02-18 19:36:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:08.659320782 +0000 UTC m=+1128.364253487" watchObservedRunningTime="2026-02-18 19:36:08.667753541 +0000 UTC m=+1128.372686206"
Feb 18 19:36:08 crc kubenswrapper[4942]: I0218 19:36:08.688221 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.688193713 podStartE2EDuration="3.688193713s" podCreationTimestamp="2026-02-18 19:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:08.680689858 +0000 UTC m=+1128.385622523" watchObservedRunningTime="2026-02-18 19:36:08.688193713 +0000 UTC m=+1128.393126388"
Feb 18 19:36:09 crc kubenswrapper[4942]: I0218 19:36:09.527199 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-54d64cf59b-xp7rk" podUID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.158:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.158:8443: connect: connection refused"
Feb 18 19:36:10 crc kubenswrapper[4942]: I0218 19:36:10.188294 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Feb 18 19:36:10 crc kubenswrapper[4942]: I0218 19:36:10.690094 4942 generic.go:334] "Generic (PLEG): container finished" podID="e4cc3ba2-abea-4fa2-9272-65ac8721c87d" containerID="ecbd025dc0394b9034d21e03a44147434ce1904d40c5ab1c61c7e88c90aadd1e" exitCode=0
Feb 18 19:36:10 crc kubenswrapper[4942]: I0218 19:36:10.690141 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9d65dd5d-c4zgj" event={"ID":"e4cc3ba2-abea-4fa2-9272-65ac8721c87d","Type":"ContainerDied","Data":"ecbd025dc0394b9034d21e03a44147434ce1904d40c5ab1c61c7e88c90aadd1e"}
Feb 18 19:36:10 crc kubenswrapper[4942]: I0218 19:36:10.848521 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-9d65dd5d-c4zgj"
Feb 18 19:36:10 crc kubenswrapper[4942]: I0218 19:36:10.954251 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 18 19:36:10 crc kubenswrapper[4942]: I0218 19:36:10.993538 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-config-data-custom\") pod \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") "
Feb 18 19:36:10 crc kubenswrapper[4942]: I0218 19:36:10.993585 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-combined-ca-bundle\") pod \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") "
Feb 18 19:36:10 crc kubenswrapper[4942]: I0218 19:36:10.993618 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92ffg\" (UniqueName: \"kubernetes.io/projected/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-kube-api-access-92ffg\") pod \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") "
Feb 18 19:36:10 crc kubenswrapper[4942]: I0218 19:36:10.993681 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-config-data\") pod \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") "
Feb 18 19:36:10 crc kubenswrapper[4942]: I0218 19:36:10.993780 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-logs\") pod \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\" (UID: \"e4cc3ba2-abea-4fa2-9272-65ac8721c87d\") "
Feb 18 19:36:10 crc kubenswrapper[4942]: I0218 19:36:10.994205 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-logs" (OuterVolumeSpecName: "logs") pod "e4cc3ba2-abea-4fa2-9272-65ac8721c87d" (UID: "e4cc3ba2-abea-4fa2-9272-65ac8721c87d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:36:10 crc kubenswrapper[4942]: I0218 19:36:10.994699 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-logs\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.000674 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e4cc3ba2-abea-4fa2-9272-65ac8721c87d" (UID: "e4cc3ba2-abea-4fa2-9272-65ac8721c87d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.006261 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-kube-api-access-92ffg" (OuterVolumeSpecName: "kube-api-access-92ffg") pod "e4cc3ba2-abea-4fa2-9272-65ac8721c87d" (UID: "e4cc3ba2-abea-4fa2-9272-65ac8721c87d"). InnerVolumeSpecName "kube-api-access-92ffg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.050725 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4cc3ba2-abea-4fa2-9272-65ac8721c87d" (UID: "e4cc3ba2-abea-4fa2-9272-65ac8721c87d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.067642 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-config-data" (OuterVolumeSpecName: "config-data") pod "e4cc3ba2-abea-4fa2-9272-65ac8721c87d" (UID: "e4cc3ba2-abea-4fa2-9272-65ac8721c87d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.097550 4942 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.097590 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.097602 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92ffg\" (UniqueName: \"kubernetes.io/projected/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-kube-api-access-92ffg\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.097615 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4cc3ba2-abea-4fa2-9272-65ac8721c87d-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.701785 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9d65dd5d-c4zgj" event={"ID":"e4cc3ba2-abea-4fa2-9272-65ac8721c87d","Type":"ContainerDied","Data":"508f30ffb0657c1e039b8b11a78534bab62a7a31f3ad591584cdc61bbaa73274"}
Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.701839 4942 scope.go:117] "RemoveContainer" containerID="ecbd025dc0394b9034d21e03a44147434ce1904d40c5ab1c61c7e88c90aadd1e"
Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.701972 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-9d65dd5d-c4zgj"
Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.731350 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-9d65dd5d-c4zgj"]
Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.738261 4942 scope.go:117] "RemoveContainer" containerID="2b088a9056603d3d58e3baff59e58248fc06291c4ff662a1d08a6fc2664c9a1c"
Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.742824 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-9d65dd5d-c4zgj"]
Feb 18 19:36:11 crc kubenswrapper[4942]: I0218 19:36:11.992566 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Feb 18 19:36:12 crc kubenswrapper[4942]: I0218 19:36:12.032038 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0"
Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.051329 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4cc3ba2-abea-4fa2-9272-65ac8721c87d" path="/var/lib/kubelet/pods/e4cc3ba2-abea-4fa2-9272-65ac8721c87d/volumes"
Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.627051 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-b6f54bc7f-8lcdv"]
Feb 18 19:36:13 crc kubenswrapper[4942]: E0218 19:36:13.627446 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4cc3ba2-abea-4fa2-9272-65ac8721c87d" containerName="barbican-api-log"
Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.627462 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4cc3ba2-abea-4fa2-9272-65ac8721c87d" containerName="barbican-api-log"
Feb 18 19:36:13 crc kubenswrapper[4942]: E0218 19:36:13.627487 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4cc3ba2-abea-4fa2-9272-65ac8721c87d" containerName="barbican-api"
Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.627495 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4cc3ba2-abea-4fa2-9272-65ac8721c87d" containerName="barbican-api"
Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.627683 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4cc3ba2-abea-4fa2-9272-65ac8721c87d" containerName="barbican-api"
Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.627702 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4cc3ba2-abea-4fa2-9272-65ac8721c87d" containerName="barbican-api-log"
Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.630472 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-b6f54bc7f-8lcdv"
Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.636201 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.636752 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.636799 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.639446 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-b6f54bc7f-8lcdv"]
Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.656901 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b50976a2-9059-4076-8a11-9c86c8b49070-config-data\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv"
Feb 18
19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.656987 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b50976a2-9059-4076-8a11-9c86c8b49070-internal-tls-certs\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.657042 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b50976a2-9059-4076-8a11-9c86c8b49070-log-httpd\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.657069 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b50976a2-9059-4076-8a11-9c86c8b49070-run-httpd\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.657088 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b50976a2-9059-4076-8a11-9c86c8b49070-combined-ca-bundle\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.657115 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slqrd\" (UniqueName: \"kubernetes.io/projected/b50976a2-9059-4076-8a11-9c86c8b49070-kube-api-access-slqrd\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " 
pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.657185 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b50976a2-9059-4076-8a11-9c86c8b49070-public-tls-certs\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.657212 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b50976a2-9059-4076-8a11-9c86c8b49070-etc-swift\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.759638 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b50976a2-9059-4076-8a11-9c86c8b49070-internal-tls-certs\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.759740 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b50976a2-9059-4076-8a11-9c86c8b49070-log-httpd\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.759792 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b50976a2-9059-4076-8a11-9c86c8b49070-combined-ca-bundle\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " 
pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.759815 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b50976a2-9059-4076-8a11-9c86c8b49070-run-httpd\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.759844 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slqrd\" (UniqueName: \"kubernetes.io/projected/b50976a2-9059-4076-8a11-9c86c8b49070-kube-api-access-slqrd\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.759912 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b50976a2-9059-4076-8a11-9c86c8b49070-public-tls-certs\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.759943 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b50976a2-9059-4076-8a11-9c86c8b49070-etc-swift\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.760016 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b50976a2-9059-4076-8a11-9c86c8b49070-config-data\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc 
kubenswrapper[4942]: I0218 19:36:13.760650 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b50976a2-9059-4076-8a11-9c86c8b49070-run-httpd\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.760874 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b50976a2-9059-4076-8a11-9c86c8b49070-log-httpd\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.765353 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b50976a2-9059-4076-8a11-9c86c8b49070-combined-ca-bundle\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.766588 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b50976a2-9059-4076-8a11-9c86c8b49070-public-tls-certs\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.771905 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b50976a2-9059-4076-8a11-9c86c8b49070-etc-swift\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.772503 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/b50976a2-9059-4076-8a11-9c86c8b49070-config-data\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.772553 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b50976a2-9059-4076-8a11-9c86c8b49070-internal-tls-certs\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.784496 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slqrd\" (UniqueName: \"kubernetes.io/projected/b50976a2-9059-4076-8a11-9c86c8b49070-kube-api-access-slqrd\") pod \"swift-proxy-b6f54bc7f-8lcdv\" (UID: \"b50976a2-9059-4076-8a11-9c86c8b49070\") " pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:13 crc kubenswrapper[4942]: I0218 19:36:13.950871 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:16 crc kubenswrapper[4942]: I0218 19:36:16.145153 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 19:36:16 crc kubenswrapper[4942]: I0218 19:36:16.992034 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 18 19:36:17 crc kubenswrapper[4942]: I0218 19:36:17.002166 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 18 19:36:17 crc kubenswrapper[4942]: I0218 19:36:17.771448 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 18 19:36:18 crc kubenswrapper[4942]: I0218 19:36:18.510839 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-b6f54bc7f-8lcdv"] Feb 18 19:36:18 crc kubenswrapper[4942]: W0218 19:36:18.511442 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb50976a2_9059_4076_8a11_9c86c8b49070.slice/crio-db0f50882f211bb3b4947a29515f0070d719cad9af603748c03e2cd23af0f1c4 WatchSource:0}: Error finding container db0f50882f211bb3b4947a29515f0070d719cad9af603748c03e2cd23af0f1c4: Status 404 returned error can't find the container with id db0f50882f211bb3b4947a29515f0070d719cad9af603748c03e2cd23af0f1c4 Feb 18 19:36:18 crc kubenswrapper[4942]: I0218 19:36:18.778464 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-b6f54bc7f-8lcdv" event={"ID":"b50976a2-9059-4076-8a11-9c86c8b49070","Type":"ContainerStarted","Data":"8c43d49e7d7c3a5091c20a931fc541f39b912cacdcb12e7dc5468bd791862b57"} Feb 18 19:36:18 crc kubenswrapper[4942]: I0218 19:36:18.778813 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-b6f54bc7f-8lcdv" 
event={"ID":"b50976a2-9059-4076-8a11-9c86c8b49070","Type":"ContainerStarted","Data":"db0f50882f211bb3b4947a29515f0070d719cad9af603748c03e2cd23af0f1c4"} Feb 18 19:36:18 crc kubenswrapper[4942]: I0218 19:36:18.781882 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"38accb89-093d-4b4b-b098-b4f73a4bb561","Type":"ContainerStarted","Data":"cf37ca18e3ccd543dae3900379bb0f942e5bece6b1bf367bc113b0672e408fc5"} Feb 18 19:36:18 crc kubenswrapper[4942]: I0218 19:36:18.800021 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.531448059 podStartE2EDuration="12.800003382s" podCreationTimestamp="2026-02-18 19:36:06 +0000 UTC" firstStartedPulling="2026-02-18 19:36:07.7178738 +0000 UTC m=+1127.422806455" lastFinishedPulling="2026-02-18 19:36:17.986429103 +0000 UTC m=+1137.691361778" observedRunningTime="2026-02-18 19:36:18.796450989 +0000 UTC m=+1138.501383654" watchObservedRunningTime="2026-02-18 19:36:18.800003382 +0000 UTC m=+1138.504936047" Feb 18 19:36:19 crc kubenswrapper[4942]: I0218 19:36:19.527949 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-54d64cf59b-xp7rk" podUID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.158:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.158:8443: connect: connection refused" Feb 18 19:36:19 crc kubenswrapper[4942]: I0218 19:36:19.528643 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-54d64cf59b-xp7rk" Feb 18 19:36:19 crc kubenswrapper[4942]: I0218 19:36:19.800252 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-b6f54bc7f-8lcdv" event={"ID":"b50976a2-9059-4076-8a11-9c86c8b49070","Type":"ContainerStarted","Data":"8aada01431394c4f8ce3a99a4f95b7f95f34d7bf452f92879db8c95b52c42e03"} Feb 18 19:36:19 crc kubenswrapper[4942]: I0218 
19:36:19.800348 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:19 crc kubenswrapper[4942]: I0218 19:36:19.800396 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-b6f54bc7f-8lcdv" Feb 18 19:36:19 crc kubenswrapper[4942]: I0218 19:36:19.824245 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-b6f54bc7f-8lcdv" podStartSLOduration=6.824225686 podStartE2EDuration="6.824225686s" podCreationTimestamp="2026-02-18 19:36:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:19.821100825 +0000 UTC m=+1139.526033500" watchObservedRunningTime="2026-02-18 19:36:19.824225686 +0000 UTC m=+1139.529158351" Feb 18 19:36:19 crc kubenswrapper[4942]: I0218 19:36:19.969995 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:19 crc kubenswrapper[4942]: I0218 19:36:19.970404 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="ceilometer-central-agent" containerID="cri-o://ed48b1a780714eb223b18d06dc51c76e72512cff5c52173a2e3ee292ee687994" gracePeriod=30 Feb 18 19:36:19 crc kubenswrapper[4942]: I0218 19:36:19.971033 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="proxy-httpd" containerID="cri-o://33f88e67e2d64ef0cdf5c3ea9ad2d23061784bba770fa1c0fe079285a1cbbc56" gracePeriod=30 Feb 18 19:36:19 crc kubenswrapper[4942]: I0218 19:36:19.971226 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="sg-core" 
containerID="cri-o://9ecd7aaddb526f7a536755bf17c5ed2cdffb53f01f22747fc9607ce810b409a8" gracePeriod=30 Feb 18 19:36:19 crc kubenswrapper[4942]: I0218 19:36:19.971323 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="ceilometer-notification-agent" containerID="cri-o://28ebb3effac1a702e96312e12a7195c54046ef1e0a31212d28c03650f2be31be" gracePeriod=30 Feb 18 19:36:20 crc kubenswrapper[4942]: I0218 19:36:20.078555 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.182:3000/\": read tcp 10.217.0.2:44782->10.217.0.182:3000: read: connection reset by peer" Feb 18 19:36:20 crc kubenswrapper[4942]: I0218 19:36:20.810185 4942 generic.go:334] "Generic (PLEG): container finished" podID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerID="33f88e67e2d64ef0cdf5c3ea9ad2d23061784bba770fa1c0fe079285a1cbbc56" exitCode=0 Feb 18 19:36:20 crc kubenswrapper[4942]: I0218 19:36:20.810501 4942 generic.go:334] "Generic (PLEG): container finished" podID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerID="9ecd7aaddb526f7a536755bf17c5ed2cdffb53f01f22747fc9607ce810b409a8" exitCode=2 Feb 18 19:36:20 crc kubenswrapper[4942]: I0218 19:36:20.810515 4942 generic.go:334] "Generic (PLEG): container finished" podID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerID="ed48b1a780714eb223b18d06dc51c76e72512cff5c52173a2e3ee292ee687994" exitCode=0 Feb 18 19:36:20 crc kubenswrapper[4942]: I0218 19:36:20.810266 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb08df0a-0162-4e04-a641-6fd65af9048b","Type":"ContainerDied","Data":"33f88e67e2d64ef0cdf5c3ea9ad2d23061784bba770fa1c0fe079285a1cbbc56"} Feb 18 19:36:20 crc kubenswrapper[4942]: I0218 19:36:20.810608 4942 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"cb08df0a-0162-4e04-a641-6fd65af9048b","Type":"ContainerDied","Data":"9ecd7aaddb526f7a536755bf17c5ed2cdffb53f01f22747fc9607ce810b409a8"} Feb 18 19:36:20 crc kubenswrapper[4942]: I0218 19:36:20.810627 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb08df0a-0162-4e04-a641-6fd65af9048b","Type":"ContainerDied","Data":"ed48b1a780714eb223b18d06dc51c76e72512cff5c52173a2e3ee292ee687994"} Feb 18 19:36:21 crc kubenswrapper[4942]: I0218 19:36:21.824591 4942 generic.go:334] "Generic (PLEG): container finished" podID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerID="28ebb3effac1a702e96312e12a7195c54046ef1e0a31212d28c03650f2be31be" exitCode=0 Feb 18 19:36:21 crc kubenswrapper[4942]: I0218 19:36:21.824817 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb08df0a-0162-4e04-a641-6fd65af9048b","Type":"ContainerDied","Data":"28ebb3effac1a702e96312e12a7195c54046ef1e0a31212d28c03650f2be31be"} Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.058084 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.227234 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-combined-ca-bundle\") pod \"cb08df0a-0162-4e04-a641-6fd65af9048b\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.227310 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-scripts\") pod \"cb08df0a-0162-4e04-a641-6fd65af9048b\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.227395 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb08df0a-0162-4e04-a641-6fd65af9048b-run-httpd\") pod \"cb08df0a-0162-4e04-a641-6fd65af9048b\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.227492 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-config-data\") pod \"cb08df0a-0162-4e04-a641-6fd65af9048b\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.227542 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb08df0a-0162-4e04-a641-6fd65af9048b-log-httpd\") pod \"cb08df0a-0162-4e04-a641-6fd65af9048b\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.227592 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-sg-core-conf-yaml\") pod \"cb08df0a-0162-4e04-a641-6fd65af9048b\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.227970 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2plf5\" (UniqueName: \"kubernetes.io/projected/cb08df0a-0162-4e04-a641-6fd65af9048b-kube-api-access-2plf5\") pod \"cb08df0a-0162-4e04-a641-6fd65af9048b\" (UID: \"cb08df0a-0162-4e04-a641-6fd65af9048b\") " Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.228820 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb08df0a-0162-4e04-a641-6fd65af9048b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cb08df0a-0162-4e04-a641-6fd65af9048b" (UID: "cb08df0a-0162-4e04-a641-6fd65af9048b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.228953 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb08df0a-0162-4e04-a641-6fd65af9048b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cb08df0a-0162-4e04-a641-6fd65af9048b" (UID: "cb08df0a-0162-4e04-a641-6fd65af9048b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.229332 4942 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb08df0a-0162-4e04-a641-6fd65af9048b-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.229361 4942 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb08df0a-0162-4e04-a641-6fd65af9048b-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.236962 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb08df0a-0162-4e04-a641-6fd65af9048b-kube-api-access-2plf5" (OuterVolumeSpecName: "kube-api-access-2plf5") pod "cb08df0a-0162-4e04-a641-6fd65af9048b" (UID: "cb08df0a-0162-4e04-a641-6fd65af9048b"). InnerVolumeSpecName "kube-api-access-2plf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.240332 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-scripts" (OuterVolumeSpecName: "scripts") pod "cb08df0a-0162-4e04-a641-6fd65af9048b" (UID: "cb08df0a-0162-4e04-a641-6fd65af9048b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.273635 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cb08df0a-0162-4e04-a641-6fd65af9048b" (UID: "cb08df0a-0162-4e04-a641-6fd65af9048b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.317012 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb08df0a-0162-4e04-a641-6fd65af9048b" (UID: "cb08df0a-0162-4e04-a641-6fd65af9048b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.331271 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2plf5\" (UniqueName: \"kubernetes.io/projected/cb08df0a-0162-4e04-a641-6fd65af9048b-kube-api-access-2plf5\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.331313 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.331324 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.331333 4942 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.354070 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-config-data" (OuterVolumeSpecName: "config-data") pod "cb08df0a-0162-4e04-a641-6fd65af9048b" (UID: "cb08df0a-0162-4e04-a641-6fd65af9048b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.421906 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-555cb4cc6f-xh69m" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.432819 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb08df0a-0162-4e04-a641-6fd65af9048b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.482292 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-67cc44d6c6-sp59w"] Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.482564 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-67cc44d6c6-sp59w" podUID="df34bdbb-8771-4d46-b5ba-29088c793a4c" containerName="neutron-api" containerID="cri-o://8b2790adbab8c3f7f1e931b6f90eb17d0d170a8ea3e8297671b08ac8cd2f42be" gracePeriod=30 Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.483098 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-67cc44d6c6-sp59w" podUID="df34bdbb-8771-4d46-b5ba-29088c793a4c" containerName="neutron-httpd" containerID="cri-o://686f47180a9ccf7623cbed7358eef7f2d2fa27a8a72e96ad726f79f619dd1afc" gracePeriod=30 Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.834817 4942 generic.go:334] "Generic (PLEG): container finished" podID="df34bdbb-8771-4d46-b5ba-29088c793a4c" containerID="686f47180a9ccf7623cbed7358eef7f2d2fa27a8a72e96ad726f79f619dd1afc" exitCode=0 Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.834901 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cc44d6c6-sp59w" event={"ID":"df34bdbb-8771-4d46-b5ba-29088c793a4c","Type":"ContainerDied","Data":"686f47180a9ccf7623cbed7358eef7f2d2fa27a8a72e96ad726f79f619dd1afc"} Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.838775 4942 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cb08df0a-0162-4e04-a641-6fd65af9048b","Type":"ContainerDied","Data":"7b5c07d9023f0c81a3490adcfb94e32fc0800eeb0c4be517c4b9b978e0bb5083"} Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.838833 4942 scope.go:117] "RemoveContainer" containerID="33f88e67e2d64ef0cdf5c3ea9ad2d23061784bba770fa1c0fe079285a1cbbc56" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.838836 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.877331 4942 scope.go:117] "RemoveContainer" containerID="9ecd7aaddb526f7a536755bf17c5ed2cdffb53f01f22747fc9607ce810b409a8" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.922242 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.933993 4942 scope.go:117] "RemoveContainer" containerID="28ebb3effac1a702e96312e12a7195c54046ef1e0a31212d28c03650f2be31be" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.947829 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.960380 4942 scope.go:117] "RemoveContainer" containerID="ed48b1a780714eb223b18d06dc51c76e72512cff5c52173a2e3ee292ee687994" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.967090 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:22 crc kubenswrapper[4942]: E0218 19:36:22.967578 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="sg-core" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.967601 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="sg-core" Feb 18 19:36:22 crc kubenswrapper[4942]: E0218 
19:36:22.967620 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="proxy-httpd" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.967628 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="proxy-httpd" Feb 18 19:36:22 crc kubenswrapper[4942]: E0218 19:36:22.967645 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="ceilometer-notification-agent" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.967655 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="ceilometer-notification-agent" Feb 18 19:36:22 crc kubenswrapper[4942]: E0218 19:36:22.967684 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="ceilometer-central-agent" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.967692 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="ceilometer-central-agent" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.967930 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="proxy-httpd" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.967945 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="sg-core" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.967961 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" containerName="ceilometer-central-agent" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.967975 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" 
containerName="ceilometer-notification-agent" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.983265 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.990751 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.991029 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:36:22 crc kubenswrapper[4942]: I0218 19:36:22.995818 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.051101 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb08df0a-0162-4e04-a641-6fd65af9048b" path="/var/lib/kubelet/pods/cb08df0a-0162-4e04-a641-6fd65af9048b/volumes" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.051803 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-hxdjn"] Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.054613 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hxdjn" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.065253 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hxdjn"] Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.129090 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-d7fm8"] Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.130419 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-d7fm8" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.137541 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-f195-account-create-update-jjctk"] Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.138859 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f195-account-create-update-jjctk" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.141971 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.147358 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwktj\" (UniqueName: \"kubernetes.io/projected/696aecc5-9837-4941-a9e2-06c1743b6983-kube-api-access-zwktj\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.147443 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.147474 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-scripts\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.147508 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtfvh\" (UniqueName: 
\"kubernetes.io/projected/54e11ed4-f85e-4125-acc8-b0b86cef91fb-kube-api-access-mtfvh\") pod \"nova-api-db-create-hxdjn\" (UID: \"54e11ed4-f85e-4125-acc8-b0b86cef91fb\") " pod="openstack/nova-api-db-create-hxdjn" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.147540 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/696aecc5-9837-4941-a9e2-06c1743b6983-run-httpd\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.147556 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54e11ed4-f85e-4125-acc8-b0b86cef91fb-operator-scripts\") pod \"nova-api-db-create-hxdjn\" (UID: \"54e11ed4-f85e-4125-acc8-b0b86cef91fb\") " pod="openstack/nova-api-db-create-hxdjn" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.147595 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.147613 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/696aecc5-9837-4941-a9e2-06c1743b6983-log-httpd\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.147642 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-config-data\") pod 
\"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.148742 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-d7fm8"] Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.158437 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f195-account-create-update-jjctk"] Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.249638 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-scripts\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.249685 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtfvh\" (UniqueName: \"kubernetes.io/projected/54e11ed4-f85e-4125-acc8-b0b86cef91fb-kube-api-access-mtfvh\") pod \"nova-api-db-create-hxdjn\" (UID: \"54e11ed4-f85e-4125-acc8-b0b86cef91fb\") " pod="openstack/nova-api-db-create-hxdjn" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.249711 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/696aecc5-9837-4941-a9e2-06c1743b6983-run-httpd\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.249731 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54e11ed4-f85e-4125-acc8-b0b86cef91fb-operator-scripts\") pod \"nova-api-db-create-hxdjn\" (UID: \"54e11ed4-f85e-4125-acc8-b0b86cef91fb\") " pod="openstack/nova-api-db-create-hxdjn" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.249776 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.249791 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/696aecc5-9837-4941-a9e2-06c1743b6983-log-httpd\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.249809 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8xk6\" (UniqueName: \"kubernetes.io/projected/3319773b-d924-402a-adbd-f421ee14c994-kube-api-access-r8xk6\") pod \"nova-cell0-db-create-d7fm8\" (UID: \"3319773b-d924-402a-adbd-f421ee14c994\") " pod="openstack/nova-cell0-db-create-d7fm8" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.249829 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-config-data\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.249880 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w65jq\" (UniqueName: \"kubernetes.io/projected/ef4ca914-d763-484f-aa35-39dbd725d14c-kube-api-access-w65jq\") pod \"nova-api-f195-account-create-update-jjctk\" (UID: \"ef4ca914-d763-484f-aa35-39dbd725d14c\") " pod="openstack/nova-api-f195-account-create-update-jjctk" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.249930 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwktj\" 
(UniqueName: \"kubernetes.io/projected/696aecc5-9837-4941-a9e2-06c1743b6983-kube-api-access-zwktj\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.249955 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3319773b-d924-402a-adbd-f421ee14c994-operator-scripts\") pod \"nova-cell0-db-create-d7fm8\" (UID: \"3319773b-d924-402a-adbd-f421ee14c994\") " pod="openstack/nova-cell0-db-create-d7fm8" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.249981 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef4ca914-d763-484f-aa35-39dbd725d14c-operator-scripts\") pod \"nova-api-f195-account-create-update-jjctk\" (UID: \"ef4ca914-d763-484f-aa35-39dbd725d14c\") " pod="openstack/nova-api-f195-account-create-update-jjctk" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.250009 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.250917 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/696aecc5-9837-4941-a9e2-06c1743b6983-log-httpd\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.251034 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/696aecc5-9837-4941-a9e2-06c1743b6983-run-httpd\") pod 
\"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.251470 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54e11ed4-f85e-4125-acc8-b0b86cef91fb-operator-scripts\") pod \"nova-api-db-create-hxdjn\" (UID: \"54e11ed4-f85e-4125-acc8-b0b86cef91fb\") " pod="openstack/nova-api-db-create-hxdjn" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.254974 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.255368 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-config-data\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.257632 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-scripts\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.264777 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.270900 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zwktj\" (UniqueName: \"kubernetes.io/projected/696aecc5-9837-4941-a9e2-06c1743b6983-kube-api-access-zwktj\") pod \"ceilometer-0\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " pod="openstack/ceilometer-0" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.277430 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtfvh\" (UniqueName: \"kubernetes.io/projected/54e11ed4-f85e-4125-acc8-b0b86cef91fb-kube-api-access-mtfvh\") pod \"nova-api-db-create-hxdjn\" (UID: \"54e11ed4-f85e-4125-acc8-b0b86cef91fb\") " pod="openstack/nova-api-db-create-hxdjn" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.321432 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.323298 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-f9r9j"] Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.324444 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-f9r9j" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.337029 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-f9r9j"] Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.352298 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3319773b-d924-402a-adbd-f421ee14c994-operator-scripts\") pod \"nova-cell0-db-create-d7fm8\" (UID: \"3319773b-d924-402a-adbd-f421ee14c994\") " pod="openstack/nova-cell0-db-create-d7fm8" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.352367 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef4ca914-d763-484f-aa35-39dbd725d14c-operator-scripts\") pod \"nova-api-f195-account-create-update-jjctk\" (UID: \"ef4ca914-d763-484f-aa35-39dbd725d14c\") " pod="openstack/nova-api-f195-account-create-update-jjctk" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.352464 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8xk6\" (UniqueName: \"kubernetes.io/projected/3319773b-d924-402a-adbd-f421ee14c994-kube-api-access-r8xk6\") pod \"nova-cell0-db-create-d7fm8\" (UID: \"3319773b-d924-402a-adbd-f421ee14c994\") " pod="openstack/nova-cell0-db-create-d7fm8" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.352557 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w65jq\" (UniqueName: \"kubernetes.io/projected/ef4ca914-d763-484f-aa35-39dbd725d14c-kube-api-access-w65jq\") pod \"nova-api-f195-account-create-update-jjctk\" (UID: \"ef4ca914-d763-484f-aa35-39dbd725d14c\") " pod="openstack/nova-api-f195-account-create-update-jjctk" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.353687 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3319773b-d924-402a-adbd-f421ee14c994-operator-scripts\") pod \"nova-cell0-db-create-d7fm8\" (UID: \"3319773b-d924-402a-adbd-f421ee14c994\") " pod="openstack/nova-cell0-db-create-d7fm8" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.354287 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef4ca914-d763-484f-aa35-39dbd725d14c-operator-scripts\") pod \"nova-api-f195-account-create-update-jjctk\" (UID: \"ef4ca914-d763-484f-aa35-39dbd725d14c\") " pod="openstack/nova-api-f195-account-create-update-jjctk" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.354486 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1b0e-account-create-update-p6b7z"] Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.355919 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.358285 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.373412 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8xk6\" (UniqueName: \"kubernetes.io/projected/3319773b-d924-402a-adbd-f421ee14c994-kube-api-access-r8xk6\") pod \"nova-cell0-db-create-d7fm8\" (UID: \"3319773b-d924-402a-adbd-f421ee14c994\") " pod="openstack/nova-cell0-db-create-d7fm8" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.374916 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w65jq\" (UniqueName: \"kubernetes.io/projected/ef4ca914-d763-484f-aa35-39dbd725d14c-kube-api-access-w65jq\") pod \"nova-api-f195-account-create-update-jjctk\" (UID: \"ef4ca914-d763-484f-aa35-39dbd725d14c\") " 
pod="openstack/nova-api-f195-account-create-update-jjctk" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.381358 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hxdjn" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.386841 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1b0e-account-create-update-p6b7z"] Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.436514 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-a3b1-account-create-update-sdgp2"] Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.438099 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.442197 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.454079 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svc8k\" (UniqueName: \"kubernetes.io/projected/de103e96-857c-4fa9-b78b-51c8f4734643-kube-api-access-svc8k\") pod \"nova-cell1-db-create-f9r9j\" (UID: \"de103e96-857c-4fa9-b78b-51c8f4734643\") " pod="openstack/nova-cell1-db-create-f9r9j" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.454147 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv798\" (UniqueName: \"kubernetes.io/projected/908017b2-bbca-42f2-b6a0-af358a18d1b7-kube-api-access-pv798\") pod \"nova-cell0-1b0e-account-create-update-p6b7z\" (UID: \"908017b2-bbca-42f2-b6a0-af358a18d1b7\") " pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.454225 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de103e96-857c-4fa9-b78b-51c8f4734643-operator-scripts\") pod \"nova-cell1-db-create-f9r9j\" (UID: \"de103e96-857c-4fa9-b78b-51c8f4734643\") " pod="openstack/nova-cell1-db-create-f9r9j" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.454252 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/908017b2-bbca-42f2-b6a0-af358a18d1b7-operator-scripts\") pod \"nova-cell0-1b0e-account-create-update-p6b7z\" (UID: \"908017b2-bbca-42f2-b6a0-af358a18d1b7\") " pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.455437 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-a3b1-account-create-update-sdgp2"] Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.460135 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d7fm8" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.475802 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-f195-account-create-update-jjctk" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.561613 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvvfc\" (UniqueName: \"kubernetes.io/projected/bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27-kube-api-access-xvvfc\") pod \"nova-cell1-a3b1-account-create-update-sdgp2\" (UID: \"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27\") " pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.561974 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de103e96-857c-4fa9-b78b-51c8f4734643-operator-scripts\") pod \"nova-cell1-db-create-f9r9j\" (UID: \"de103e96-857c-4fa9-b78b-51c8f4734643\") " pod="openstack/nova-cell1-db-create-f9r9j" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.562010 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/908017b2-bbca-42f2-b6a0-af358a18d1b7-operator-scripts\") pod \"nova-cell0-1b0e-account-create-update-p6b7z\" (UID: \"908017b2-bbca-42f2-b6a0-af358a18d1b7\") " pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.562060 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27-operator-scripts\") pod \"nova-cell1-a3b1-account-create-update-sdgp2\" (UID: \"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27\") " pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.562141 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svc8k\" (UniqueName: 
\"kubernetes.io/projected/de103e96-857c-4fa9-b78b-51c8f4734643-kube-api-access-svc8k\") pod \"nova-cell1-db-create-f9r9j\" (UID: \"de103e96-857c-4fa9-b78b-51c8f4734643\") " pod="openstack/nova-cell1-db-create-f9r9j" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.562190 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv798\" (UniqueName: \"kubernetes.io/projected/908017b2-bbca-42f2-b6a0-af358a18d1b7-kube-api-access-pv798\") pod \"nova-cell0-1b0e-account-create-update-p6b7z\" (UID: \"908017b2-bbca-42f2-b6a0-af358a18d1b7\") " pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.564329 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/908017b2-bbca-42f2-b6a0-af358a18d1b7-operator-scripts\") pod \"nova-cell0-1b0e-account-create-update-p6b7z\" (UID: \"908017b2-bbca-42f2-b6a0-af358a18d1b7\") " pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.564916 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de103e96-857c-4fa9-b78b-51c8f4734643-operator-scripts\") pod \"nova-cell1-db-create-f9r9j\" (UID: \"de103e96-857c-4fa9-b78b-51c8f4734643\") " pod="openstack/nova-cell1-db-create-f9r9j" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.586995 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv798\" (UniqueName: \"kubernetes.io/projected/908017b2-bbca-42f2-b6a0-af358a18d1b7-kube-api-access-pv798\") pod \"nova-cell0-1b0e-account-create-update-p6b7z\" (UID: \"908017b2-bbca-42f2-b6a0-af358a18d1b7\") " pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.611303 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-svc8k\" (UniqueName: \"kubernetes.io/projected/de103e96-857c-4fa9-b78b-51c8f4734643-kube-api-access-svc8k\") pod \"nova-cell1-db-create-f9r9j\" (UID: \"de103e96-857c-4fa9-b78b-51c8f4734643\") " pod="openstack/nova-cell1-db-create-f9r9j" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.669568 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27-operator-scripts\") pod \"nova-cell1-a3b1-account-create-update-sdgp2\" (UID: \"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27\") " pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.669777 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvvfc\" (UniqueName: \"kubernetes.io/projected/bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27-kube-api-access-xvvfc\") pod \"nova-cell1-a3b1-account-create-update-sdgp2\" (UID: \"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27\") " pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.670799 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27-operator-scripts\") pod \"nova-cell1-a3b1-account-create-update-sdgp2\" (UID: \"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27\") " pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2" Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.706750 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvvfc\" (UniqueName: \"kubernetes.io/projected/bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27-kube-api-access-xvvfc\") pod \"nova-cell1-a3b1-account-create-update-sdgp2\" (UID: \"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27\") " pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2" Feb 18 19:36:23 crc kubenswrapper[4942]: 
I0218 19:36:23.764192 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-f9r9j"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.775677 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.812218 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.977791 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-b6f54bc7f-8lcdv"
Feb 18 19:36:23 crc kubenswrapper[4942]: I0218 19:36:23.978534 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-b6f54bc7f-8lcdv"
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.100035 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:36:24 crc kubenswrapper[4942]: W0218 19:36:24.124814 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod696aecc5_9837_4941_a9e2_06c1743b6983.slice/crio-307cb7b1955145e6c351de8d60f608d94882a5c445ff5005916b7c10fe933d13 WatchSource:0}: Error finding container 307cb7b1955145e6c351de8d60f608d94882a5c445ff5005916b7c10fe933d13: Status 404 returned error can't find the container with id 307cb7b1955145e6c351de8d60f608d94882a5c445ff5005916b7c10fe933d13
Feb 18 19:36:24 crc kubenswrapper[4942]: W0218 19:36:24.398942 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54e11ed4_f85e_4125_acc8_b0b86cef91fb.slice/crio-ddd182eb206dc22a342a3b1f3594a6f99229b825fa7f274d17d9f1ea44479c1e WatchSource:0}: Error finding container ddd182eb206dc22a342a3b1f3594a6f99229b825fa7f274d17d9f1ea44479c1e: Status 404 returned error can't find the container with id ddd182eb206dc22a342a3b1f3594a6f99229b825fa7f274d17d9f1ea44479c1e
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.401596 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hxdjn"]
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.525106 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-d7fm8"]
Feb 18 19:36:24 crc kubenswrapper[4942]: W0218 19:36:24.536350 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef4ca914_d763_484f_aa35_39dbd725d14c.slice/crio-ca97c472a2ce2679b6717b8b6ab28e4c936e92048aa5f3ab74e769d7bcd04c68 WatchSource:0}: Error finding container ca97c472a2ce2679b6717b8b6ab28e4c936e92048aa5f3ab74e769d7bcd04c68: Status 404 returned error can't find the container with id ca97c472a2ce2679b6717b8b6ab28e4c936e92048aa5f3ab74e769d7bcd04c68
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.544740 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f195-account-create-update-jjctk"]
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.700048 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-f9r9j"]
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.723819 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1b0e-account-create-update-p6b7z"]
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.733220 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-a3b1-account-create-update-sdgp2"]
Feb 18 19:36:24 crc kubenswrapper[4942]: W0218 19:36:24.767000 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdd3a7b9_5bb1_47a4_8a4a_95131e50cf27.slice/crio-fd25e15f2b69ef489266f93ff5e54bcabaad72ec94bfa30e588907d0fa96302e WatchSource:0}: Error finding container fd25e15f2b69ef489266f93ff5e54bcabaad72ec94bfa30e588907d0fa96302e: Status 404 returned error can't find the container with id fd25e15f2b69ef489266f93ff5e54bcabaad72ec94bfa30e588907d0fa96302e
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.950006 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-f9r9j" event={"ID":"de103e96-857c-4fa9-b78b-51c8f4734643","Type":"ContainerStarted","Data":"b394124733feae208cca8678a899da21a42cd1a1bcdab470215b05da69af051d"}
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.967385 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"696aecc5-9837-4941-a9e2-06c1743b6983","Type":"ContainerStarted","Data":"307cb7b1955145e6c351de8d60f608d94882a5c445ff5005916b7c10fe933d13"}
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.984479 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hxdjn" event={"ID":"54e11ed4-f85e-4125-acc8-b0b86cef91fb","Type":"ContainerStarted","Data":"9f2c359e5e4f7ba110dc92287a82c170423f21670d64c2a6b420aa0beff96ce3"}
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.984530 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hxdjn" event={"ID":"54e11ed4-f85e-4125-acc8-b0b86cef91fb","Type":"ContainerStarted","Data":"ddd182eb206dc22a342a3b1f3594a6f99229b825fa7f274d17d9f1ea44479c1e"}
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.987943 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z" event={"ID":"908017b2-bbca-42f2-b6a0-af358a18d1b7","Type":"ContainerStarted","Data":"527d26ceec3f8972dda44cae7e3560073a290e058817bf5b7e32fe2b65220c1d"}
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.991853 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d7fm8" event={"ID":"3319773b-d924-402a-adbd-f421ee14c994","Type":"ContainerStarted","Data":"4ee086e7e747f10b7d38270d86480864775d35a33a827da89168941ff41e3484"}
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.991931 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d7fm8" event={"ID":"3319773b-d924-402a-adbd-f421ee14c994","Type":"ContainerStarted","Data":"64ececf7ff8da7e27ae9a12795f5871d5ca9a079d17366752709256c0742b5ba"}
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.999735 4942 generic.go:334] "Generic (PLEG): container finished" podID="df34bdbb-8771-4d46-b5ba-29088c793a4c" containerID="8b2790adbab8c3f7f1e931b6f90eb17d0d170a8ea3e8297671b08ac8cd2f42be" exitCode=0
Feb 18 19:36:24 crc kubenswrapper[4942]: I0218 19:36:24.999831 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cc44d6c6-sp59w" event={"ID":"df34bdbb-8771-4d46-b5ba-29088c793a4c","Type":"ContainerDied","Data":"8b2790adbab8c3f7f1e931b6f90eb17d0d170a8ea3e8297671b08ac8cd2f42be"}
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.002792 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2" event={"ID":"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27","Type":"ContainerStarted","Data":"fd25e15f2b69ef489266f93ff5e54bcabaad72ec94bfa30e588907d0fa96302e"}
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.006631 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f195-account-create-update-jjctk" event={"ID":"ef4ca914-d763-484f-aa35-39dbd725d14c","Type":"ContainerStarted","Data":"c2c74965083b09d2fda5c205fdee24ab8d991088f20cd6c4fd29973dbc9a7c39"}
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.006668 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f195-account-create-update-jjctk" event={"ID":"ef4ca914-d763-484f-aa35-39dbd725d14c","Type":"ContainerStarted","Data":"ca97c472a2ce2679b6717b8b6ab28e4c936e92048aa5f3ab74e769d7bcd04c68"}
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.019281 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-d7fm8" podStartSLOduration=2.019260487 podStartE2EDuration="2.019260487s" podCreationTimestamp="2026-02-18 19:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:25.014930204 +0000 UTC m=+1144.719862869" watchObservedRunningTime="2026-02-18 19:36:25.019260487 +0000 UTC m=+1144.724193162"
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.046269 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-f195-account-create-update-jjctk" podStartSLOduration=2.0462509779999998 podStartE2EDuration="2.046250978s" podCreationTimestamp="2026-02-18 19:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:25.030609762 +0000 UTC m=+1144.735542427" watchObservedRunningTime="2026-02-18 19:36:25.046250978 +0000 UTC m=+1144.751183643"
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.274199 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.449935 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67cc44d6c6-sp59w"
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.537711 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-config\") pod \"df34bdbb-8771-4d46-b5ba-29088c793a4c\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") "
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.538152 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-httpd-config\") pod \"df34bdbb-8771-4d46-b5ba-29088c793a4c\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") "
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.538179 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-ovndb-tls-certs\") pod \"df34bdbb-8771-4d46-b5ba-29088c793a4c\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") "
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.538244 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-combined-ca-bundle\") pod \"df34bdbb-8771-4d46-b5ba-29088c793a4c\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") "
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.538342 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znrs9\" (UniqueName: \"kubernetes.io/projected/df34bdbb-8771-4d46-b5ba-29088c793a4c-kube-api-access-znrs9\") pod \"df34bdbb-8771-4d46-b5ba-29088c793a4c\" (UID: \"df34bdbb-8771-4d46-b5ba-29088c793a4c\") "
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.544864 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df34bdbb-8771-4d46-b5ba-29088c793a4c-kube-api-access-znrs9" (OuterVolumeSpecName: "kube-api-access-znrs9") pod "df34bdbb-8771-4d46-b5ba-29088c793a4c" (UID: "df34bdbb-8771-4d46-b5ba-29088c793a4c"). InnerVolumeSpecName "kube-api-access-znrs9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.546355 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "df34bdbb-8771-4d46-b5ba-29088c793a4c" (UID: "df34bdbb-8771-4d46-b5ba-29088c793a4c"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.621721 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.622022 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" containerName="glance-log" containerID="cri-o://af0f17fdd4b111e87d9ffc74c4fed5912320cf203228fa25c7dde7a00ca05bb2" gracePeriod=30
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.622124 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" containerName="glance-httpd" containerID="cri-o://0c82f89cf5ce35ccda5a5b29f76963df047d6ffca2e6b1f0144d5f20d3dfe0a7" gracePeriod=30
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.642169 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znrs9\" (UniqueName: \"kubernetes.io/projected/df34bdbb-8771-4d46-b5ba-29088c793a4c-kube-api-access-znrs9\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.642199 4942 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.643892 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "df34bdbb-8771-4d46-b5ba-29088c793a4c" (UID: "df34bdbb-8771-4d46-b5ba-29088c793a4c"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.643975 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df34bdbb-8771-4d46-b5ba-29088c793a4c" (UID: "df34bdbb-8771-4d46-b5ba-29088c793a4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.666117 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-config" (OuterVolumeSpecName: "config") pod "df34bdbb-8771-4d46-b5ba-29088c793a4c" (UID: "df34bdbb-8771-4d46-b5ba-29088c793a4c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.745469 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.745513 4942 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:25 crc kubenswrapper[4942]: I0218 19:36:25.745522 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df34bdbb-8771-4d46-b5ba-29088c793a4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.019703 4942 generic.go:334] "Generic (PLEG): container finished" podID="3319773b-d924-402a-adbd-f421ee14c994" containerID="4ee086e7e747f10b7d38270d86480864775d35a33a827da89168941ff41e3484" exitCode=0
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.019752 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d7fm8" event={"ID":"3319773b-d924-402a-adbd-f421ee14c994","Type":"ContainerDied","Data":"4ee086e7e747f10b7d38270d86480864775d35a33a827da89168941ff41e3484"}
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.021539 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2" event={"ID":"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27","Type":"ContainerStarted","Data":"12651ed44c362c43a5a615685457fd590c1593f4afa3ac50fda9dea54a2e1f71"}
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.024535 4942 generic.go:334] "Generic (PLEG): container finished" podID="de103e96-857c-4fa9-b78b-51c8f4734643" containerID="a09c56da144b09bdcb7865a7cc27a2ff95e7937bd4f16a766144008dd1c49144" exitCode=0
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.024880 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-f9r9j" event={"ID":"de103e96-857c-4fa9-b78b-51c8f4734643","Type":"ContainerDied","Data":"a09c56da144b09bdcb7865a7cc27a2ff95e7937bd4f16a766144008dd1c49144"}
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.027667 4942 generic.go:334] "Generic (PLEG): container finished" podID="dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" containerID="af0f17fdd4b111e87d9ffc74c4fed5912320cf203228fa25c7dde7a00ca05bb2" exitCode=143
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.027882 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2","Type":"ContainerDied","Data":"af0f17fdd4b111e87d9ffc74c4fed5912320cf203228fa25c7dde7a00ca05bb2"}
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.055260 4942 generic.go:334] "Generic (PLEG): container finished" podID="54e11ed4-f85e-4125-acc8-b0b86cef91fb" containerID="9f2c359e5e4f7ba110dc92287a82c170423f21670d64c2a6b420aa0beff96ce3" exitCode=0
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.055347 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hxdjn" event={"ID":"54e11ed4-f85e-4125-acc8-b0b86cef91fb","Type":"ContainerDied","Data":"9f2c359e5e4f7ba110dc92287a82c170423f21670d64c2a6b420aa0beff96ce3"}
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.065192 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z" event={"ID":"908017b2-bbca-42f2-b6a0-af358a18d1b7","Type":"ContainerStarted","Data":"866788c6c2a051f7476fcb5d58fd9c13e62810bec69e94d021b4616590e98f0b"}
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.082987 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cc44d6c6-sp59w" event={"ID":"df34bdbb-8771-4d46-b5ba-29088c793a4c","Type":"ContainerDied","Data":"16cfdf5777da304074f8658c0e294de7985ac237e0c31312cdfc21ceef0ca88c"}
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.083063 4942 scope.go:117] "RemoveContainer" containerID="686f47180a9ccf7623cbed7358eef7f2d2fa27a8a72e96ad726f79f619dd1afc"
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.083208 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67cc44d6c6-sp59w"
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.091733 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2" podStartSLOduration=3.091710415 podStartE2EDuration="3.091710415s" podCreationTimestamp="2026-02-18 19:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:26.068634605 +0000 UTC m=+1145.773567280" watchObservedRunningTime="2026-02-18 19:36:26.091710415 +0000 UTC m=+1145.796643080"
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.100889 4942 generic.go:334] "Generic (PLEG): container finished" podID="ef4ca914-d763-484f-aa35-39dbd725d14c" containerID="c2c74965083b09d2fda5c205fdee24ab8d991088f20cd6c4fd29973dbc9a7c39" exitCode=0
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.101054 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f195-account-create-update-jjctk" event={"ID":"ef4ca914-d763-484f-aa35-39dbd725d14c","Type":"ContainerDied","Data":"c2c74965083b09d2fda5c205fdee24ab8d991088f20cd6c4fd29973dbc9a7c39"}
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.124714 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z" podStartSLOduration=3.124697682 podStartE2EDuration="3.124697682s" podCreationTimestamp="2026-02-18 19:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:26.123153422 +0000 UTC m=+1145.828086087" watchObservedRunningTime="2026-02-18 19:36:26.124697682 +0000 UTC m=+1145.829630347"
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.136712 4942 generic.go:334] "Generic (PLEG): container finished" podID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerID="036dc92b12e420ef80458fb3e23d3375424a9aed1ed6d80a904da58e73ba2659" exitCode=137
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.136786 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54d64cf59b-xp7rk" event={"ID":"3ecc91e6-4e7f-438f-8530-bb8dd55764c5","Type":"ContainerDied","Data":"036dc92b12e420ef80458fb3e23d3375424a9aed1ed6d80a904da58e73ba2659"}
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.193468 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"696aecc5-9837-4941-a9e2-06c1743b6983","Type":"ContainerStarted","Data":"f199cea9b51631457ac52fd4aa8f018a58676c04337cc4ce60e41428c59205eb"}
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.248870 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54d64cf59b-xp7rk"
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.259102 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-67cc44d6c6-sp59w"]
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.259590 4942 scope.go:117] "RemoveContainer" containerID="8b2790adbab8c3f7f1e931b6f90eb17d0d170a8ea3e8297671b08ac8cd2f42be"
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.281937 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-67cc44d6c6-sp59w"]
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.358218 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-horizon-secret-key\") pod \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") "
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.358314 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-scripts\") pod \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") "
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.358409 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-config-data\") pod \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") "
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.358525 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-logs\") pod \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") "
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.358670 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnrb5\" (UniqueName: \"kubernetes.io/projected/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-kube-api-access-dnrb5\") pod \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") "
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.358722 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-horizon-tls-certs\") pod \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") "
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.358922 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-combined-ca-bundle\") pod \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\" (UID: \"3ecc91e6-4e7f-438f-8530-bb8dd55764c5\") "
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.360227 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-logs" (OuterVolumeSpecName: "logs") pod "3ecc91e6-4e7f-438f-8530-bb8dd55764c5" (UID: "3ecc91e6-4e7f-438f-8530-bb8dd55764c5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.384948 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3ecc91e6-4e7f-438f-8530-bb8dd55764c5" (UID: "3ecc91e6-4e7f-438f-8530-bb8dd55764c5"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.402455 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-config-data" (OuterVolumeSpecName: "config-data") pod "3ecc91e6-4e7f-438f-8530-bb8dd55764c5" (UID: "3ecc91e6-4e7f-438f-8530-bb8dd55764c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.404952 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-kube-api-access-dnrb5" (OuterVolumeSpecName: "kube-api-access-dnrb5") pod "3ecc91e6-4e7f-438f-8530-bb8dd55764c5" (UID: "3ecc91e6-4e7f-438f-8530-bb8dd55764c5"). InnerVolumeSpecName "kube-api-access-dnrb5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.418128 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-scripts" (OuterVolumeSpecName: "scripts") pod "3ecc91e6-4e7f-438f-8530-bb8dd55764c5" (UID: "3ecc91e6-4e7f-438f-8530-bb8dd55764c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.425903 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ecc91e6-4e7f-438f-8530-bb8dd55764c5" (UID: "3ecc91e6-4e7f-438f-8530-bb8dd55764c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.429029 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "3ecc91e6-4e7f-438f-8530-bb8dd55764c5" (UID: "3ecc91e6-4e7f-438f-8530-bb8dd55764c5"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.437367 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hxdjn"
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.462350 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnrb5\" (UniqueName: \"kubernetes.io/projected/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-kube-api-access-dnrb5\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.462397 4942 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-horizon-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.462410 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.462422 4942 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.462434 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.462445 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.462457 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ecc91e6-4e7f-438f-8530-bb8dd55764c5-logs\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.563427 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtfvh\" (UniqueName: \"kubernetes.io/projected/54e11ed4-f85e-4125-acc8-b0b86cef91fb-kube-api-access-mtfvh\") pod \"54e11ed4-f85e-4125-acc8-b0b86cef91fb\" (UID: \"54e11ed4-f85e-4125-acc8-b0b86cef91fb\") "
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.563949 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54e11ed4-f85e-4125-acc8-b0b86cef91fb-operator-scripts\") pod \"54e11ed4-f85e-4125-acc8-b0b86cef91fb\" (UID: \"54e11ed4-f85e-4125-acc8-b0b86cef91fb\") "
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.564560 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54e11ed4-f85e-4125-acc8-b0b86cef91fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "54e11ed4-f85e-4125-acc8-b0b86cef91fb" (UID: "54e11ed4-f85e-4125-acc8-b0b86cef91fb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.564985 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54e11ed4-f85e-4125-acc8-b0b86cef91fb-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.566275 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54e11ed4-f85e-4125-acc8-b0b86cef91fb-kube-api-access-mtfvh" (OuterVolumeSpecName: "kube-api-access-mtfvh") pod "54e11ed4-f85e-4125-acc8-b0b86cef91fb" (UID: "54e11ed4-f85e-4125-acc8-b0b86cef91fb"). InnerVolumeSpecName "kube-api-access-mtfvh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:36:26 crc kubenswrapper[4942]: I0218 19:36:26.667178 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtfvh\" (UniqueName: \"kubernetes.io/projected/54e11ed4-f85e-4125-acc8-b0b86cef91fb-kube-api-access-mtfvh\") on node \"crc\" DevicePath \"\""
Feb 18 19:36:27 crc kubenswrapper[4942]: I0218 19:36:27.050756 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df34bdbb-8771-4d46-b5ba-29088c793a4c" path="/var/lib/kubelet/pods/df34bdbb-8771-4d46-b5ba-29088c793a4c/volumes"
Feb 18 19:36:27 crc kubenswrapper[4942]: I0218 19:36:27.204140 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54d64cf59b-xp7rk" event={"ID":"3ecc91e6-4e7f-438f-8530-bb8dd55764c5","Type":"ContainerDied","Data":"f9c6502e1e5809e23b3664eb42d069f99f7705e9a66bf07935b4912b98778c64"}
Feb 18 19:36:27 crc kubenswrapper[4942]: I0218 19:36:27.204194 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54d64cf59b-xp7rk"
Feb 18 19:36:27 crc kubenswrapper[4942]: I0218 19:36:27.204214 4942 scope.go:117] "RemoveContainer" containerID="4bd98068ec637cd03846de3ac7d0bc145a81ebf089811ebc4b9501aa76cae874"
Feb 18 19:36:27 crc kubenswrapper[4942]: I0218 19:36:27.205596 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hxdjn"
Feb 18 19:36:27 crc kubenswrapper[4942]: I0218 19:36:27.205595 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hxdjn" event={"ID":"54e11ed4-f85e-4125-acc8-b0b86cef91fb","Type":"ContainerDied","Data":"ddd182eb206dc22a342a3b1f3594a6f99229b825fa7f274d17d9f1ea44479c1e"}
Feb 18 19:36:27 crc kubenswrapper[4942]: I0218 19:36:27.205632 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddd182eb206dc22a342a3b1f3594a6f99229b825fa7f274d17d9f1ea44479c1e"
Feb 18 19:36:27 crc kubenswrapper[4942]: I0218 19:36:27.239169 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-54d64cf59b-xp7rk"]
Feb 18 19:36:27 crc kubenswrapper[4942]: I0218 19:36:27.247449 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-54d64cf59b-xp7rk"]
Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.009375 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.009991 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5cd0efdc-b208-4270-9c23-33e01f7298be" containerName="glance-log" containerID="cri-o://1b92a562ea433f43d820eeece6e874b38a343cedbb1b276827ec28ad7679c4ae" gracePeriod=30
Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.010649 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5cd0efdc-b208-4270-9c23-33e01f7298be" containerName="glance-httpd" containerID="cri-o://7f7ecb8106c4011dd2affe0db157078ed440c3dc9a5f336a7fd4922172637f01" gracePeriod=30
Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.079419 4942 scope.go:117] "RemoveContainer" containerID="036dc92b12e420ef80458fb3e23d3375424a9aed1ed6d80a904da58e73ba2659"
Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.221952 4942 generic.go:334] "Generic (PLEG): container finished" podID="908017b2-bbca-42f2-b6a0-af358a18d1b7" containerID="866788c6c2a051f7476fcb5d58fd9c13e62810bec69e94d021b4616590e98f0b" exitCode=0
Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.222322 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z" event={"ID":"908017b2-bbca-42f2-b6a0-af358a18d1b7","Type":"ContainerDied","Data":"866788c6c2a051f7476fcb5d58fd9c13e62810bec69e94d021b4616590e98f0b"}
Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.227125 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d7fm8" event={"ID":"3319773b-d924-402a-adbd-f421ee14c994","Type":"ContainerDied","Data":"64ececf7ff8da7e27ae9a12795f5871d5ca9a079d17366752709256c0742b5ba"}
Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.227172 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64ececf7ff8da7e27ae9a12795f5871d5ca9a079d17366752709256c0742b5ba"
Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.240554 4942 generic.go:334] "Generic (PLEG): container finished" podID="5cd0efdc-b208-4270-9c23-33e01f7298be" containerID="1b92a562ea433f43d820eeece6e874b38a343cedbb1b276827ec28ad7679c4ae" exitCode=143
Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.240668 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5cd0efdc-b208-4270-9c23-33e01f7298be","Type":"ContainerDied","Data":"1b92a562ea433f43d820eeece6e874b38a343cedbb1b276827ec28ad7679c4ae"}
Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.259149 4942 generic.go:334] "Generic (PLEG): container finished" podID="bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27" containerID="12651ed44c362c43a5a615685457fd590c1593f4afa3ac50fda9dea54a2e1f71" exitCode=0
Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.259260 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2" event={"ID":"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27","Type":"ContainerDied","Data":"12651ed44c362c43a5a615685457fd590c1593f4afa3ac50fda9dea54a2e1f71"}
Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.270826 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f195-account-create-update-jjctk" event={"ID":"ef4ca914-d763-484f-aa35-39dbd725d14c","Type":"ContainerDied","Data":"ca97c472a2ce2679b6717b8b6ab28e4c936e92048aa5f3ab74e769d7bcd04c68"}
Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.270859 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca97c472a2ce2679b6717b8b6ab28e4c936e92048aa5f3ab74e769d7bcd04c68"
Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.285992 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d7fm8"
Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.286037 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-f9r9j" event={"ID":"de103e96-857c-4fa9-b78b-51c8f4734643","Type":"ContainerDied","Data":"b394124733feae208cca8678a899da21a42cd1a1bcdab470215b05da69af051d"}
Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.286131 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b394124733feae208cca8678a899da21a42cd1a1bcdab470215b05da69af051d"
Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.291741 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-f9r9j"
Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.293978 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f195-account-create-update-jjctk"
Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.404469 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de103e96-857c-4fa9-b78b-51c8f4734643-operator-scripts\") pod \"de103e96-857c-4fa9-b78b-51c8f4734643\" (UID: \"de103e96-857c-4fa9-b78b-51c8f4734643\") "
Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.404519 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3319773b-d924-402a-adbd-f421ee14c994-operator-scripts\") pod \"3319773b-d924-402a-adbd-f421ee14c994\" (UID: \"3319773b-d924-402a-adbd-f421ee14c994\") "
Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.404576 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8xk6\" (UniqueName: \"kubernetes.io/projected/3319773b-d924-402a-adbd-f421ee14c994-kube-api-access-r8xk6\") pod
\"3319773b-d924-402a-adbd-f421ee14c994\" (UID: \"3319773b-d924-402a-adbd-f421ee14c994\") " Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.404654 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef4ca914-d763-484f-aa35-39dbd725d14c-operator-scripts\") pod \"ef4ca914-d763-484f-aa35-39dbd725d14c\" (UID: \"ef4ca914-d763-484f-aa35-39dbd725d14c\") " Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.404685 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svc8k\" (UniqueName: \"kubernetes.io/projected/de103e96-857c-4fa9-b78b-51c8f4734643-kube-api-access-svc8k\") pod \"de103e96-857c-4fa9-b78b-51c8f4734643\" (UID: \"de103e96-857c-4fa9-b78b-51c8f4734643\") " Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.404851 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w65jq\" (UniqueName: \"kubernetes.io/projected/ef4ca914-d763-484f-aa35-39dbd725d14c-kube-api-access-w65jq\") pod \"ef4ca914-d763-484f-aa35-39dbd725d14c\" (UID: \"ef4ca914-d763-484f-aa35-39dbd725d14c\") " Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.405227 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de103e96-857c-4fa9-b78b-51c8f4734643-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de103e96-857c-4fa9-b78b-51c8f4734643" (UID: "de103e96-857c-4fa9-b78b-51c8f4734643"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.405307 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de103e96-857c-4fa9-b78b-51c8f4734643-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.405339 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3319773b-d924-402a-adbd-f421ee14c994-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3319773b-d924-402a-adbd-f421ee14c994" (UID: "3319773b-d924-402a-adbd-f421ee14c994"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.405785 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef4ca914-d763-484f-aa35-39dbd725d14c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ef4ca914-d763-484f-aa35-39dbd725d14c" (UID: "ef4ca914-d763-484f-aa35-39dbd725d14c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.411883 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de103e96-857c-4fa9-b78b-51c8f4734643-kube-api-access-svc8k" (OuterVolumeSpecName: "kube-api-access-svc8k") pod "de103e96-857c-4fa9-b78b-51c8f4734643" (UID: "de103e96-857c-4fa9-b78b-51c8f4734643"). InnerVolumeSpecName "kube-api-access-svc8k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.412082 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef4ca914-d763-484f-aa35-39dbd725d14c-kube-api-access-w65jq" (OuterVolumeSpecName: "kube-api-access-w65jq") pod "ef4ca914-d763-484f-aa35-39dbd725d14c" (UID: "ef4ca914-d763-484f-aa35-39dbd725d14c"). InnerVolumeSpecName "kube-api-access-w65jq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.412221 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3319773b-d924-402a-adbd-f421ee14c994-kube-api-access-r8xk6" (OuterVolumeSpecName: "kube-api-access-r8xk6") pod "3319773b-d924-402a-adbd-f421ee14c994" (UID: "3319773b-d924-402a-adbd-f421ee14c994"). InnerVolumeSpecName "kube-api-access-r8xk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.507576 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w65jq\" (UniqueName: \"kubernetes.io/projected/ef4ca914-d763-484f-aa35-39dbd725d14c-kube-api-access-w65jq\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.507607 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3319773b-d924-402a-adbd-f421ee14c994-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.507615 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8xk6\" (UniqueName: \"kubernetes.io/projected/3319773b-d924-402a-adbd-f421ee14c994-kube-api-access-r8xk6\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.507625 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ef4ca914-d763-484f-aa35-39dbd725d14c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.507636 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svc8k\" (UniqueName: \"kubernetes.io/projected/de103e96-857c-4fa9-b78b-51c8f4734643-kube-api-access-svc8k\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.793054 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.167:9292/healthcheck\": read tcp 10.217.0.2:51714->10.217.0.167:9292: read: connection reset by peer" Feb 18 19:36:28 crc kubenswrapper[4942]: I0218 19:36:28.793127 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.167:9292/healthcheck\": read tcp 10.217.0.2:51712->10.217.0.167:9292: read: connection reset by peer" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.047160 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" path="/var/lib/kubelet/pods/3ecc91e6-4e7f-438f-8530-bb8dd55764c5/volumes" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.311074 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.388380 4942 generic.go:334] "Generic (PLEG): container finished" podID="dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" containerID="0c82f89cf5ce35ccda5a5b29f76963df047d6ffca2e6b1f0144d5f20d3dfe0a7" exitCode=0 Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.388460 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2","Type":"ContainerDied","Data":"0c82f89cf5ce35ccda5a5b29f76963df047d6ffca2e6b1f0144d5f20d3dfe0a7"} Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.388494 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2","Type":"ContainerDied","Data":"5071cc9380a8d29894cf185feb69d5860ec44c77140f4a82c7520791aad9109c"} Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.388513 4942 scope.go:117] "RemoveContainer" containerID="0c82f89cf5ce35ccda5a5b29f76963df047d6ffca2e6b1f0144d5f20d3dfe0a7" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.388661 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.407625 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-f9r9j" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.408346 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"696aecc5-9837-4941-a9e2-06c1743b6983","Type":"ContainerStarted","Data":"ff5a66ca95a9acb98874490f26d4d917450e3dbd52c6493e4894edb793c0261b"} Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.408374 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"696aecc5-9837-4941-a9e2-06c1743b6983","Type":"ContainerStarted","Data":"7bd1dc3d7ceb9cd510d24aaa8a624c13e7f5dd415a98c0dc54d4fb8d58f9ca84"} Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.408409 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d7fm8" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.408886 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f195-account-create-update-jjctk" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.430738 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.431011 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-httpd-run\") pod \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.431479 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod 
"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" (UID: "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.431877 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-public-tls-certs\") pod \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.431904 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-logs\") pod \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.432236 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-combined-ca-bundle\") pod \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.432272 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-scripts\") pod \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.432315 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk7vw\" (UniqueName: \"kubernetes.io/projected/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-kube-api-access-rk7vw\") pod \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.432346 4942 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-config-data\") pod \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.433193 4942 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.435338 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-logs" (OuterVolumeSpecName: "logs") pod "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" (UID: "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.452150 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" (UID: "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.452862 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-scripts" (OuterVolumeSpecName: "scripts") pod "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" (UID: "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.459109 4942 scope.go:117] "RemoveContainer" containerID="af0f17fdd4b111e87d9ffc74c4fed5912320cf203228fa25c7dde7a00ca05bb2" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.463815 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-kube-api-access-rk7vw" (OuterVolumeSpecName: "kube-api-access-rk7vw") pod "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" (UID: "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2"). InnerVolumeSpecName "kube-api-access-rk7vw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.479357 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" (UID: "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.534466 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" (UID: "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.534839 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-public-tls-certs\") pod \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\" (UID: \"dc47abc8-8f2f-41c6-96c3-d6e81388e5b2\") " Feb 18 19:36:29 crc kubenswrapper[4942]: W0218 19:36:29.535647 4942 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2/volumes/kubernetes.io~secret/public-tls-certs Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.535714 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" (UID: "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.535894 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rk7vw\" (UniqueName: \"kubernetes.io/projected/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-kube-api-access-rk7vw\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.535965 4942 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.536031 4942 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.536084 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.536136 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.536186 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.543514 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-config-data" (OuterVolumeSpecName: "config-data") pod "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" (UID: "dc47abc8-8f2f-41c6-96c3-d6e81388e5b2"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.576695 4942 scope.go:117] "RemoveContainer" containerID="0c82f89cf5ce35ccda5a5b29f76963df047d6ffca2e6b1f0144d5f20d3dfe0a7" Feb 18 19:36:29 crc kubenswrapper[4942]: E0218 19:36:29.582974 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c82f89cf5ce35ccda5a5b29f76963df047d6ffca2e6b1f0144d5f20d3dfe0a7\": container with ID starting with 0c82f89cf5ce35ccda5a5b29f76963df047d6ffca2e6b1f0144d5f20d3dfe0a7 not found: ID does not exist" containerID="0c82f89cf5ce35ccda5a5b29f76963df047d6ffca2e6b1f0144d5f20d3dfe0a7" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.583010 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c82f89cf5ce35ccda5a5b29f76963df047d6ffca2e6b1f0144d5f20d3dfe0a7"} err="failed to get container status \"0c82f89cf5ce35ccda5a5b29f76963df047d6ffca2e6b1f0144d5f20d3dfe0a7\": rpc error: code = NotFound desc = could not find container \"0c82f89cf5ce35ccda5a5b29f76963df047d6ffca2e6b1f0144d5f20d3dfe0a7\": container with ID starting with 0c82f89cf5ce35ccda5a5b29f76963df047d6ffca2e6b1f0144d5f20d3dfe0a7 not found: ID does not exist" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.583034 4942 scope.go:117] "RemoveContainer" containerID="af0f17fdd4b111e87d9ffc74c4fed5912320cf203228fa25c7dde7a00ca05bb2" Feb 18 19:36:29 crc kubenswrapper[4942]: E0218 19:36:29.587728 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af0f17fdd4b111e87d9ffc74c4fed5912320cf203228fa25c7dde7a00ca05bb2\": container with ID starting with af0f17fdd4b111e87d9ffc74c4fed5912320cf203228fa25c7dde7a00ca05bb2 not found: ID does not exist" containerID="af0f17fdd4b111e87d9ffc74c4fed5912320cf203228fa25c7dde7a00ca05bb2" Feb 18 19:36:29 crc 
kubenswrapper[4942]: I0218 19:36:29.587808 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af0f17fdd4b111e87d9ffc74c4fed5912320cf203228fa25c7dde7a00ca05bb2"} err="failed to get container status \"af0f17fdd4b111e87d9ffc74c4fed5912320cf203228fa25c7dde7a00ca05bb2\": rpc error: code = NotFound desc = could not find container \"af0f17fdd4b111e87d9ffc74c4fed5912320cf203228fa25c7dde7a00ca05bb2\": container with ID starting with af0f17fdd4b111e87d9ffc74c4fed5912320cf203228fa25c7dde7a00ca05bb2 not found: ID does not exist" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.594868 4942 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.638208 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.638238 4942 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.757012 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.774883 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.795882 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:36:29 crc kubenswrapper[4942]: E0218 19:36:29.796296 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df34bdbb-8771-4d46-b5ba-29088c793a4c" containerName="neutron-api" 
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796313 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="df34bdbb-8771-4d46-b5ba-29088c793a4c" containerName="neutron-api"
Feb 18 19:36:29 crc kubenswrapper[4942]: E0218 19:36:29.796327 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df34bdbb-8771-4d46-b5ba-29088c793a4c" containerName="neutron-httpd"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796333 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="df34bdbb-8771-4d46-b5ba-29088c793a4c" containerName="neutron-httpd"
Feb 18 19:36:29 crc kubenswrapper[4942]: E0218 19:36:29.796346 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" containerName="glance-httpd"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796353 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" containerName="glance-httpd"
Feb 18 19:36:29 crc kubenswrapper[4942]: E0218 19:36:29.796363 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54e11ed4-f85e-4125-acc8-b0b86cef91fb" containerName="mariadb-database-create"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796369 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="54e11ed4-f85e-4125-acc8-b0b86cef91fb" containerName="mariadb-database-create"
Feb 18 19:36:29 crc kubenswrapper[4942]: E0218 19:36:29.796387 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerName="horizon"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796392 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerName="horizon"
Feb 18 19:36:29 crc kubenswrapper[4942]: E0218 19:36:29.796414 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de103e96-857c-4fa9-b78b-51c8f4734643" containerName="mariadb-database-create"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796422 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="de103e96-857c-4fa9-b78b-51c8f4734643" containerName="mariadb-database-create"
Feb 18 19:36:29 crc kubenswrapper[4942]: E0218 19:36:29.796434 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerName="horizon-log"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796440 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerName="horizon-log"
Feb 18 19:36:29 crc kubenswrapper[4942]: E0218 19:36:29.796453 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3319773b-d924-402a-adbd-f421ee14c994" containerName="mariadb-database-create"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796459 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="3319773b-d924-402a-adbd-f421ee14c994" containerName="mariadb-database-create"
Feb 18 19:36:29 crc kubenswrapper[4942]: E0218 19:36:29.796466 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" containerName="glance-log"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796473 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" containerName="glance-log"
Feb 18 19:36:29 crc kubenswrapper[4942]: E0218 19:36:29.796486 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef4ca914-d763-484f-aa35-39dbd725d14c" containerName="mariadb-account-create-update"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796492 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef4ca914-d763-484f-aa35-39dbd725d14c" containerName="mariadb-account-create-update"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796658 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef4ca914-d763-484f-aa35-39dbd725d14c" containerName="mariadb-account-create-update"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796667 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerName="horizon-log"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796681 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="54e11ed4-f85e-4125-acc8-b0b86cef91fb" containerName="mariadb-database-create"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796690 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="3319773b-d924-402a-adbd-f421ee14c994" containerName="mariadb-database-create"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796698 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ecc91e6-4e7f-438f-8530-bb8dd55764c5" containerName="horizon"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796709 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" containerName="glance-httpd"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796716 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="de103e96-857c-4fa9-b78b-51c8f4734643" containerName="mariadb-database-create"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796723 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="df34bdbb-8771-4d46-b5ba-29088c793a4c" containerName="neutron-api"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796732 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="df34bdbb-8771-4d46-b5ba-29088c793a4c" containerName="neutron-httpd"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.796743 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" containerName="glance-log"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.797691 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.801735 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.801860 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.811808 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.845932 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.895215 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.951268 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvvfc\" (UniqueName: \"kubernetes.io/projected/bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27-kube-api-access-xvvfc\") pod \"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27\" (UID: \"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27\") "
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.951467 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27-operator-scripts\") pod \"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27\" (UID: \"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27\") "
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.951702 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c208165d-3fd9-436b-b964-c2839e67f1f9-config-data\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.951733 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c208165d-3fd9-436b-b964-c2839e67f1f9-logs\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.951754 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmp2f\" (UniqueName: \"kubernetes.io/projected/c208165d-3fd9-436b-b964-c2839e67f1f9-kube-api-access-cmp2f\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.951799 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c208165d-3fd9-436b-b964-c2839e67f1f9-scripts\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.951851 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c208165d-3fd9-436b-b964-c2839e67f1f9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.951916 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c208165d-3fd9-436b-b964-c2839e67f1f9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.951943 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.951979 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c208165d-3fd9-436b-b964-c2839e67f1f9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0"
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.953639 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27" (UID: "bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:36:29 crc kubenswrapper[4942]: I0218 19:36:29.966018 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27-kube-api-access-xvvfc" (OuterVolumeSpecName: "kube-api-access-xvvfc") pod "bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27" (UID: "bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27"). InnerVolumeSpecName "kube-api-access-xvvfc".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.053057 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv798\" (UniqueName: \"kubernetes.io/projected/908017b2-bbca-42f2-b6a0-af358a18d1b7-kube-api-access-pv798\") pod \"908017b2-bbca-42f2-b6a0-af358a18d1b7\" (UID: \"908017b2-bbca-42f2-b6a0-af358a18d1b7\") " Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.053255 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/908017b2-bbca-42f2-b6a0-af358a18d1b7-operator-scripts\") pod \"908017b2-bbca-42f2-b6a0-af358a18d1b7\" (UID: \"908017b2-bbca-42f2-b6a0-af358a18d1b7\") " Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.053564 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c208165d-3fd9-436b-b964-c2839e67f1f9-scripts\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.053607 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c208165d-3fd9-436b-b964-c2839e67f1f9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.053670 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c208165d-3fd9-436b-b964-c2839e67f1f9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 
19:36:30.053699 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.053739 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c208165d-3fd9-436b-b964-c2839e67f1f9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.053808 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c208165d-3fd9-436b-b964-c2839e67f1f9-config-data\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.053829 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c208165d-3fd9-436b-b964-c2839e67f1f9-logs\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.053847 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmp2f\" (UniqueName: \"kubernetes.io/projected/c208165d-3fd9-436b-b964-c2839e67f1f9-kube-api-access-cmp2f\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.053894 4942 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-xvvfc\" (UniqueName: \"kubernetes.io/projected/bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27-kube-api-access-xvvfc\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.053905 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.054889 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.055055 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c208165d-3fd9-436b-b964-c2839e67f1f9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.055241 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c208165d-3fd9-436b-b964-c2839e67f1f9-logs\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.055461 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/908017b2-bbca-42f2-b6a0-af358a18d1b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "908017b2-bbca-42f2-b6a0-af358a18d1b7" (UID: "908017b2-bbca-42f2-b6a0-af358a18d1b7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.057868 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/908017b2-bbca-42f2-b6a0-af358a18d1b7-kube-api-access-pv798" (OuterVolumeSpecName: "kube-api-access-pv798") pod "908017b2-bbca-42f2-b6a0-af358a18d1b7" (UID: "908017b2-bbca-42f2-b6a0-af358a18d1b7"). InnerVolumeSpecName "kube-api-access-pv798". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.069281 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c208165d-3fd9-436b-b964-c2839e67f1f9-scripts\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.069314 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c208165d-3fd9-436b-b964-c2839e67f1f9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.070291 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c208165d-3fd9-436b-b964-c2839e67f1f9-config-data\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.070819 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c208165d-3fd9-436b-b964-c2839e67f1f9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " 
pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.075396 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmp2f\" (UniqueName: \"kubernetes.io/projected/c208165d-3fd9-436b-b964-c2839e67f1f9-kube-api-access-cmp2f\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.120634 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"c208165d-3fd9-436b-b964-c2839e67f1f9\") " pod="openstack/glance-default-external-api-0" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.155560 4942 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/908017b2-bbca-42f2-b6a0-af358a18d1b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.155597 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv798\" (UniqueName: \"kubernetes.io/projected/908017b2-bbca-42f2-b6a0-af358a18d1b7-kube-api-access-pv798\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.414999 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2" event={"ID":"bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27","Type":"ContainerDied","Data":"fd25e15f2b69ef489266f93ff5e54bcabaad72ec94bfa30e588907d0fa96302e"} Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.415036 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd25e15f2b69ef489266f93ff5e54bcabaad72ec94bfa30e588907d0fa96302e" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.415086 4942 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-cell1-a3b1-account-create-update-sdgp2" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.419753 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z" event={"ID":"908017b2-bbca-42f2-b6a0-af358a18d1b7","Type":"ContainerDied","Data":"527d26ceec3f8972dda44cae7e3560073a290e058817bf5b7e32fe2b65220c1d"} Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.419804 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="527d26ceec3f8972dda44cae7e3560073a290e058817bf5b7e32fe2b65220c1d" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.419822 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1b0e-account-create-update-p6b7z" Feb 18 19:36:30 crc kubenswrapper[4942]: I0218 19:36:30.423142 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.074007 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc47abc8-8f2f-41c6-96c3-d6e81388e5b2" path="/var/lib/kubelet/pods/dc47abc8-8f2f-41c6-96c3-d6e81388e5b2/volumes" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.079088 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.435140 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c208165d-3fd9-436b-b964-c2839e67f1f9","Type":"ContainerStarted","Data":"9bec0bad1afe526c4e78ff309b48fd1aa33f8d684edcd63d83c8ebbe72fe7dd7"} Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.446598 4942 generic.go:334] "Generic (PLEG): container finished" podID="5cd0efdc-b208-4270-9c23-33e01f7298be" 
containerID="7f7ecb8106c4011dd2affe0db157078ed440c3dc9a5f336a7fd4922172637f01" exitCode=0 Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.446680 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5cd0efdc-b208-4270-9c23-33e01f7298be","Type":"ContainerDied","Data":"7f7ecb8106c4011dd2affe0db157078ed440c3dc9a5f336a7fd4922172637f01"} Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.452364 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"696aecc5-9837-4941-a9e2-06c1743b6983","Type":"ContainerStarted","Data":"dab22ef643cf4a1848ae3f2c3600077ca2b9255a63d8f2ec325041316075d69d"} Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.452528 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="ceilometer-central-agent" containerID="cri-o://f199cea9b51631457ac52fd4aa8f018a58676c04337cc4ce60e41428c59205eb" gracePeriod=30 Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.452572 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.452604 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="proxy-httpd" containerID="cri-o://dab22ef643cf4a1848ae3f2c3600077ca2b9255a63d8f2ec325041316075d69d" gracePeriod=30 Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.452661 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="ceilometer-notification-agent" containerID="cri-o://7bd1dc3d7ceb9cd510d24aaa8a624c13e7f5dd415a98c0dc54d4fb8d58f9ca84" gracePeriod=30 Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.452622 4942 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="sg-core" containerID="cri-o://ff5a66ca95a9acb98874490f26d4d917450e3dbd52c6493e4894edb793c0261b" gracePeriod=30 Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.479474 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.981627669 podStartE2EDuration="9.479435865s" podCreationTimestamp="2026-02-18 19:36:22 +0000 UTC" firstStartedPulling="2026-02-18 19:36:24.127785924 +0000 UTC m=+1143.832718579" lastFinishedPulling="2026-02-18 19:36:30.62559411 +0000 UTC m=+1150.330526775" observedRunningTime="2026-02-18 19:36:31.477347391 +0000 UTC m=+1151.182280056" watchObservedRunningTime="2026-02-18 19:36:31.479435865 +0000 UTC m=+1151.184368530" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.787525 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.892438 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cd0efdc-b208-4270-9c23-33e01f7298be-logs\") pod \"5cd0efdc-b208-4270-9c23-33e01f7298be\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.892573 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5cd0efdc-b208-4270-9c23-33e01f7298be-httpd-run\") pod \"5cd0efdc-b208-4270-9c23-33e01f7298be\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.892607 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-internal-tls-certs\") pod \"5cd0efdc-b208-4270-9c23-33e01f7298be\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.892651 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bv8n\" (UniqueName: \"kubernetes.io/projected/5cd0efdc-b208-4270-9c23-33e01f7298be-kube-api-access-8bv8n\") pod \"5cd0efdc-b208-4270-9c23-33e01f7298be\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.892693 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-scripts\") pod \"5cd0efdc-b208-4270-9c23-33e01f7298be\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.892724 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-combined-ca-bundle\") pod \"5cd0efdc-b208-4270-9c23-33e01f7298be\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.892748 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"5cd0efdc-b208-4270-9c23-33e01f7298be\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.892789 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-config-data\") pod \"5cd0efdc-b208-4270-9c23-33e01f7298be\" (UID: \"5cd0efdc-b208-4270-9c23-33e01f7298be\") " Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.894168 4942 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cd0efdc-b208-4270-9c23-33e01f7298be-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5cd0efdc-b208-4270-9c23-33e01f7298be" (UID: "5cd0efdc-b208-4270-9c23-33e01f7298be"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.894411 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cd0efdc-b208-4270-9c23-33e01f7298be-logs" (OuterVolumeSpecName: "logs") pod "5cd0efdc-b208-4270-9c23-33e01f7298be" (UID: "5cd0efdc-b208-4270-9c23-33e01f7298be"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.900102 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cd0efdc-b208-4270-9c23-33e01f7298be-kube-api-access-8bv8n" (OuterVolumeSpecName: "kube-api-access-8bv8n") pod "5cd0efdc-b208-4270-9c23-33e01f7298be" (UID: "5cd0efdc-b208-4270-9c23-33e01f7298be"). InnerVolumeSpecName "kube-api-access-8bv8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.900190 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "5cd0efdc-b208-4270-9c23-33e01f7298be" (UID: "5cd0efdc-b208-4270-9c23-33e01f7298be"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.900319 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-scripts" (OuterVolumeSpecName: "scripts") pod "5cd0efdc-b208-4270-9c23-33e01f7298be" (UID: "5cd0efdc-b208-4270-9c23-33e01f7298be"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.928443 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.941239 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5cd0efdc-b208-4270-9c23-33e01f7298be" (UID: "5cd0efdc-b208-4270-9c23-33e01f7298be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.966707 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-config-data" (OuterVolumeSpecName: "config-data") pod "5cd0efdc-b208-4270-9c23-33e01f7298be" (UID: "5cd0efdc-b208-4270-9c23-33e01f7298be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.984537 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5cd0efdc-b208-4270-9c23-33e01f7298be" (UID: "5cd0efdc-b208-4270-9c23-33e01f7298be"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.995741 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cd0efdc-b208-4270-9c23-33e01f7298be-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.995962 4942 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5cd0efdc-b208-4270-9c23-33e01f7298be-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.996048 4942 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.996126 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bv8n\" (UniqueName: \"kubernetes.io/projected/5cd0efdc-b208-4270-9c23-33e01f7298be-kube-api-access-8bv8n\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.996217 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.996306 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.996398 4942 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 18 19:36:31 crc kubenswrapper[4942]: I0218 19:36:31.996475 4942 reconciler_common.go:293] "Volume 
detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cd0efdc-b208-4270-9c23-33e01f7298be-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.021650 4942 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.089581 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-9bf555976-zxfhl" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.101280 4942 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.169703 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5794bf846d-82xzg"] Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.169940 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5794bf846d-82xzg" podUID="ab301488-e86d-4ba2-b628-f4ea689acd3b" containerName="placement-log" containerID="cri-o://f8a851dfe023e77ce2012d0b840a4729b646e24254cac11ed22579fa4353c01b" gracePeriod=30 Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.170320 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5794bf846d-82xzg" podUID="ab301488-e86d-4ba2-b628-f4ea689acd3b" containerName="placement-api" containerID="cri-o://9ef44ea2e648e2bbfb3bd289c97d6ea2ed93750446192377e2017b04b006f489" gracePeriod=30 Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.490991 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"c208165d-3fd9-436b-b964-c2839e67f1f9","Type":"ContainerStarted","Data":"82853c4c7a5be022d0766662ed2ed2a6066b1ff3042e37f5caa28d7873e5610f"} Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.512480 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5cd0efdc-b208-4270-9c23-33e01f7298be","Type":"ContainerDied","Data":"a6a851f31a8af36c76a03d082cd2bcde730a917e0fda0acf37bf24b1cd98ff69"} Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.512531 4942 scope.go:117] "RemoveContainer" containerID="7f7ecb8106c4011dd2affe0db157078ed440c3dc9a5f336a7fd4922172637f01" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.512659 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.527798 4942 generic.go:334] "Generic (PLEG): container finished" podID="ab301488-e86d-4ba2-b628-f4ea689acd3b" containerID="f8a851dfe023e77ce2012d0b840a4729b646e24254cac11ed22579fa4353c01b" exitCode=143 Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.527903 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5794bf846d-82xzg" event={"ID":"ab301488-e86d-4ba2-b628-f4ea689acd3b","Type":"ContainerDied","Data":"f8a851dfe023e77ce2012d0b840a4729b646e24254cac11ed22579fa4353c01b"} Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.560904 4942 scope.go:117] "RemoveContainer" containerID="1b92a562ea433f43d820eeece6e874b38a343cedbb1b276827ec28ad7679c4ae" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.568192 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.572905 4942 generic.go:334] "Generic (PLEG): container finished" podID="696aecc5-9837-4941-a9e2-06c1743b6983" containerID="dab22ef643cf4a1848ae3f2c3600077ca2b9255a63d8f2ec325041316075d69d" 
exitCode=0 Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.572933 4942 generic.go:334] "Generic (PLEG): container finished" podID="696aecc5-9837-4941-a9e2-06c1743b6983" containerID="ff5a66ca95a9acb98874490f26d4d917450e3dbd52c6493e4894edb793c0261b" exitCode=2 Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.572943 4942 generic.go:334] "Generic (PLEG): container finished" podID="696aecc5-9837-4941-a9e2-06c1743b6983" containerID="7bd1dc3d7ceb9cd510d24aaa8a624c13e7f5dd415a98c0dc54d4fb8d58f9ca84" exitCode=0 Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.573840 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"696aecc5-9837-4941-a9e2-06c1743b6983","Type":"ContainerDied","Data":"dab22ef643cf4a1848ae3f2c3600077ca2b9255a63d8f2ec325041316075d69d"} Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.573873 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"696aecc5-9837-4941-a9e2-06c1743b6983","Type":"ContainerDied","Data":"ff5a66ca95a9acb98874490f26d4d917450e3dbd52c6493e4894edb793c0261b"} Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.573883 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"696aecc5-9837-4941-a9e2-06c1743b6983","Type":"ContainerDied","Data":"7bd1dc3d7ceb9cd510d24aaa8a624c13e7f5dd415a98c0dc54d4fb8d58f9ca84"} Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.577927 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.606722 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:36:32 crc kubenswrapper[4942]: E0218 19:36:32.607094 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27" containerName="mariadb-account-create-update" Feb 18 19:36:32 crc kubenswrapper[4942]: 
I0218 19:36:32.607111 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27" containerName="mariadb-account-create-update" Feb 18 19:36:32 crc kubenswrapper[4942]: E0218 19:36:32.607129 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd0efdc-b208-4270-9c23-33e01f7298be" containerName="glance-log" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.607134 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd0efdc-b208-4270-9c23-33e01f7298be" containerName="glance-log" Feb 18 19:36:32 crc kubenswrapper[4942]: E0218 19:36:32.607156 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="908017b2-bbca-42f2-b6a0-af358a18d1b7" containerName="mariadb-account-create-update" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.607163 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="908017b2-bbca-42f2-b6a0-af358a18d1b7" containerName="mariadb-account-create-update" Feb 18 19:36:32 crc kubenswrapper[4942]: E0218 19:36:32.607175 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd0efdc-b208-4270-9c23-33e01f7298be" containerName="glance-httpd" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.607181 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd0efdc-b208-4270-9c23-33e01f7298be" containerName="glance-httpd" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.607371 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27" containerName="mariadb-account-create-update" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.607384 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cd0efdc-b208-4270-9c23-33e01f7298be" containerName="glance-log" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.607396 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="908017b2-bbca-42f2-b6a0-af358a18d1b7" containerName="mariadb-account-create-update" 
Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.607404 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cd0efdc-b208-4270-9c23-33e01f7298be" containerName="glance-httpd" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.608316 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.616136 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.616147 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.650107 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.716749 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1669290-6aa1-4a36-8397-a62c14647c13-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.716814 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c1669290-6aa1-4a36-8397-a62c14647c13-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.716838 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.716859 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1669290-6aa1-4a36-8397-a62c14647c13-logs\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.716888 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1669290-6aa1-4a36-8397-a62c14647c13-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.716943 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1669290-6aa1-4a36-8397-a62c14647c13-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.716959 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvwwb\" (UniqueName: \"kubernetes.io/projected/c1669290-6aa1-4a36-8397-a62c14647c13-kube-api-access-fvwwb\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.716992 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c1669290-6aa1-4a36-8397-a62c14647c13-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.818248 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1669290-6aa1-4a36-8397-a62c14647c13-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.818339 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1669290-6aa1-4a36-8397-a62c14647c13-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.818361 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvwwb\" (UniqueName: \"kubernetes.io/projected/c1669290-6aa1-4a36-8397-a62c14647c13-kube-api-access-fvwwb\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.818396 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1669290-6aa1-4a36-8397-a62c14647c13-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.818471 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c1669290-6aa1-4a36-8397-a62c14647c13-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.818505 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c1669290-6aa1-4a36-8397-a62c14647c13-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.818538 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.818565 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1669290-6aa1-4a36-8397-a62c14647c13-logs\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.819097 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1669290-6aa1-4a36-8397-a62c14647c13-logs\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.819113 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c1669290-6aa1-4a36-8397-a62c14647c13-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.819405 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.824086 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1669290-6aa1-4a36-8397-a62c14647c13-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.825918 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1669290-6aa1-4a36-8397-a62c14647c13-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.826467 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1669290-6aa1-4a36-8397-a62c14647c13-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.826555 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1669290-6aa1-4a36-8397-a62c14647c13-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " 
pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.839329 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvwwb\" (UniqueName: \"kubernetes.io/projected/c1669290-6aa1-4a36-8397-a62c14647c13-kube-api-access-fvwwb\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.863340 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"c1669290-6aa1-4a36-8397-a62c14647c13\") " pod="openstack/glance-default-internal-api-0" Feb 18 19:36:32 crc kubenswrapper[4942]: I0218 19:36:32.937677 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 18 19:36:33 crc kubenswrapper[4942]: I0218 19:36:33.061330 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cd0efdc-b208-4270-9c23-33e01f7298be" path="/var/lib/kubelet/pods/5cd0efdc-b208-4270-9c23-33e01f7298be/volumes" Feb 18 19:36:33 crc kubenswrapper[4942]: I0218 19:36:33.514751 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 18 19:36:33 crc kubenswrapper[4942]: W0218 19:36:33.517539 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1669290_6aa1_4a36_8397_a62c14647c13.slice/crio-9144f806c209abb6436b4db321b570024f4498390566722f0745c8f40e0fee98 WatchSource:0}: Error finding container 9144f806c209abb6436b4db321b570024f4498390566722f0745c8f40e0fee98: Status 404 returned error can't find the container with id 9144f806c209abb6436b4db321b570024f4498390566722f0745c8f40e0fee98 Feb 18 19:36:33 crc kubenswrapper[4942]: I0218 
19:36:33.619305 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c208165d-3fd9-436b-b964-c2839e67f1f9","Type":"ContainerStarted","Data":"36d0d041423d481c40dd46c8de918565aa453789659b355b6b0e7f64245b52d4"} Feb 18 19:36:33 crc kubenswrapper[4942]: I0218 19:36:33.628167 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c1669290-6aa1-4a36-8397-a62c14647c13","Type":"ContainerStarted","Data":"9144f806c209abb6436b4db321b570024f4498390566722f0745c8f40e0fee98"} Feb 18 19:36:33 crc kubenswrapper[4942]: I0218 19:36:33.656357 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.656340872 podStartE2EDuration="4.656340872s" podCreationTimestamp="2026-02-18 19:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:33.648772455 +0000 UTC m=+1153.353705140" watchObservedRunningTime="2026-02-18 19:36:33.656340872 +0000 UTC m=+1153.361273537" Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.005859 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bbrrn"] Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.007326 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bbrrn" Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.015445 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.015643 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.015738 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-r8ppn" Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.036440 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bbrrn"] Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.047858 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzdtz\" (UniqueName: \"kubernetes.io/projected/e14c764c-c1b5-4196-a48b-2aff4c38782b-kube-api-access-hzdtz\") pod \"nova-cell0-conductor-db-sync-bbrrn\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " pod="openstack/nova-cell0-conductor-db-sync-bbrrn" Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.047913 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bbrrn\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " pod="openstack/nova-cell0-conductor-db-sync-bbrrn" Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.047986 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-config-data\") pod \"nova-cell0-conductor-db-sync-bbrrn\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " 
pod="openstack/nova-cell0-conductor-db-sync-bbrrn" Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.048041 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-scripts\") pod \"nova-cell0-conductor-db-sync-bbrrn\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " pod="openstack/nova-cell0-conductor-db-sync-bbrrn" Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.150074 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-scripts\") pod \"nova-cell0-conductor-db-sync-bbrrn\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " pod="openstack/nova-cell0-conductor-db-sync-bbrrn" Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.150235 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzdtz\" (UniqueName: \"kubernetes.io/projected/e14c764c-c1b5-4196-a48b-2aff4c38782b-kube-api-access-hzdtz\") pod \"nova-cell0-conductor-db-sync-bbrrn\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " pod="openstack/nova-cell0-conductor-db-sync-bbrrn" Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.150278 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bbrrn\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " pod="openstack/nova-cell0-conductor-db-sync-bbrrn" Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.150425 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-config-data\") pod \"nova-cell0-conductor-db-sync-bbrrn\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " 
pod="openstack/nova-cell0-conductor-db-sync-bbrrn" Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.155941 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-bbrrn\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " pod="openstack/nova-cell0-conductor-db-sync-bbrrn" Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.156695 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-config-data\") pod \"nova-cell0-conductor-db-sync-bbrrn\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " pod="openstack/nova-cell0-conductor-db-sync-bbrrn" Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.170386 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzdtz\" (UniqueName: \"kubernetes.io/projected/e14c764c-c1b5-4196-a48b-2aff4c38782b-kube-api-access-hzdtz\") pod \"nova-cell0-conductor-db-sync-bbrrn\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " pod="openstack/nova-cell0-conductor-db-sync-bbrrn" Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.170894 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-scripts\") pod \"nova-cell0-conductor-db-sync-bbrrn\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " pod="openstack/nova-cell0-conductor-db-sync-bbrrn" Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.329244 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bbrrn" Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.645719 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c1669290-6aa1-4a36-8397-a62c14647c13","Type":"ContainerStarted","Data":"539326ba24780274bb3169a41e3f0301cbc69d974ca578d45c9e263dd0889740"} Feb 18 19:36:34 crc kubenswrapper[4942]: I0218 19:36:34.862625 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bbrrn"] Feb 18 19:36:34 crc kubenswrapper[4942]: W0218 19:36:34.870199 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode14c764c_c1b5_4196_a48b_2aff4c38782b.slice/crio-41848844e5ca0ec4e07ca2fcd7497cd5893a17d562a97a6ad440536587e4b055 WatchSource:0}: Error finding container 41848844e5ca0ec4e07ca2fcd7497cd5893a17d562a97a6ad440536587e4b055: Status 404 returned error can't find the container with id 41848844e5ca0ec4e07ca2fcd7497cd5893a17d562a97a6ad440536587e4b055 Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.658182 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c1669290-6aa1-4a36-8397-a62c14647c13","Type":"ContainerStarted","Data":"9ef69375edeb57daae67118a01e73cabffdf8cd2b54e9dc26ca9ce13b9e3aab6"} Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.663306 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bbrrn" event={"ID":"e14c764c-c1b5-4196-a48b-2aff4c38782b","Type":"ContainerStarted","Data":"41848844e5ca0ec4e07ca2fcd7497cd5893a17d562a97a6ad440536587e4b055"} Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.691897 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.691881184 podStartE2EDuration="3.691881184s" 
podCreationTimestamp="2026-02-18 19:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:35.681319409 +0000 UTC m=+1155.386252074" watchObservedRunningTime="2026-02-18 19:36:35.691881184 +0000 UTC m=+1155.396813849" Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.710219 4942 generic.go:334] "Generic (PLEG): container finished" podID="ab301488-e86d-4ba2-b628-f4ea689acd3b" containerID="9ef44ea2e648e2bbfb3bd289c97d6ea2ed93750446192377e2017b04b006f489" exitCode=0 Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.710263 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5794bf846d-82xzg" event={"ID":"ab301488-e86d-4ba2-b628-f4ea689acd3b","Type":"ContainerDied","Data":"9ef44ea2e648e2bbfb3bd289c97d6ea2ed93750446192377e2017b04b006f489"} Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.788778 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.884887 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-scripts\") pod \"ab301488-e86d-4ba2-b628-f4ea689acd3b\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.885019 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v89br\" (UniqueName: \"kubernetes.io/projected/ab301488-e86d-4ba2-b628-f4ea689acd3b-kube-api-access-v89br\") pod \"ab301488-e86d-4ba2-b628-f4ea689acd3b\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.885041 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab301488-e86d-4ba2-b628-f4ea689acd3b-logs\") pod \"ab301488-e86d-4ba2-b628-f4ea689acd3b\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.885084 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-config-data\") pod \"ab301488-e86d-4ba2-b628-f4ea689acd3b\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.885099 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-combined-ca-bundle\") pod \"ab301488-e86d-4ba2-b628-f4ea689acd3b\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.885377 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-internal-tls-certs\") pod \"ab301488-e86d-4ba2-b628-f4ea689acd3b\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.885400 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-public-tls-certs\") pod \"ab301488-e86d-4ba2-b628-f4ea689acd3b\" (UID: \"ab301488-e86d-4ba2-b628-f4ea689acd3b\") " Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.886197 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab301488-e86d-4ba2-b628-f4ea689acd3b-logs" (OuterVolumeSpecName: "logs") pod "ab301488-e86d-4ba2-b628-f4ea689acd3b" (UID: "ab301488-e86d-4ba2-b628-f4ea689acd3b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.893913 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab301488-e86d-4ba2-b628-f4ea689acd3b-kube-api-access-v89br" (OuterVolumeSpecName: "kube-api-access-v89br") pod "ab301488-e86d-4ba2-b628-f4ea689acd3b" (UID: "ab301488-e86d-4ba2-b628-f4ea689acd3b"). InnerVolumeSpecName "kube-api-access-v89br". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.904104 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-scripts" (OuterVolumeSpecName: "scripts") pod "ab301488-e86d-4ba2-b628-f4ea689acd3b" (UID: "ab301488-e86d-4ba2-b628-f4ea689acd3b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.968045 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-config-data" (OuterVolumeSpecName: "config-data") pod "ab301488-e86d-4ba2-b628-f4ea689acd3b" (UID: "ab301488-e86d-4ba2-b628-f4ea689acd3b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.994875 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v89br\" (UniqueName: \"kubernetes.io/projected/ab301488-e86d-4ba2-b628-f4ea689acd3b-kube-api-access-v89br\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.999461 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab301488-e86d-4ba2-b628-f4ea689acd3b-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.999502 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:35 crc kubenswrapper[4942]: I0218 19:36:35.999530 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.013170 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab301488-e86d-4ba2-b628-f4ea689acd3b" (UID: "ab301488-e86d-4ba2-b628-f4ea689acd3b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.054994 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ab301488-e86d-4ba2-b628-f4ea689acd3b" (UID: "ab301488-e86d-4ba2-b628-f4ea689acd3b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.102204 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.102234 4942 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.108967 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ab301488-e86d-4ba2-b628-f4ea689acd3b" (UID: "ab301488-e86d-4ba2-b628-f4ea689acd3b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.204132 4942 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab301488-e86d-4ba2-b628-f4ea689acd3b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.753270 4942 generic.go:334] "Generic (PLEG): container finished" podID="696aecc5-9837-4941-a9e2-06c1743b6983" containerID="f199cea9b51631457ac52fd4aa8f018a58676c04337cc4ce60e41428c59205eb" exitCode=0 Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.753386 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"696aecc5-9837-4941-a9e2-06c1743b6983","Type":"ContainerDied","Data":"f199cea9b51631457ac52fd4aa8f018a58676c04337cc4ce60e41428c59205eb"} Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.757616 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5794bf846d-82xzg" event={"ID":"ab301488-e86d-4ba2-b628-f4ea689acd3b","Type":"ContainerDied","Data":"5655340f4bf0abd595b0c47b02dacb9178105661696797fd33a844b3ed3d1922"} Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.757672 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5794bf846d-82xzg" Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.757672 4942 scope.go:117] "RemoveContainer" containerID="9ef44ea2e648e2bbfb3bd289c97d6ea2ed93750446192377e2017b04b006f489" Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.802358 4942 scope.go:117] "RemoveContainer" containerID="f8a851dfe023e77ce2012d0b840a4729b646e24254cac11ed22579fa4353c01b" Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.812341 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5794bf846d-82xzg"] Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.826454 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5794bf846d-82xzg"] Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.903627 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.919807 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-config-data\") pod \"696aecc5-9837-4941-a9e2-06c1743b6983\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.919902 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-sg-core-conf-yaml\") pod \"696aecc5-9837-4941-a9e2-06c1743b6983\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.919936 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/696aecc5-9837-4941-a9e2-06c1743b6983-log-httpd\") pod \"696aecc5-9837-4941-a9e2-06c1743b6983\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " Feb 18 19:36:36 crc 
kubenswrapper[4942]: I0218 19:36:36.919982 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/696aecc5-9837-4941-a9e2-06c1743b6983-run-httpd\") pod \"696aecc5-9837-4941-a9e2-06c1743b6983\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.920119 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwktj\" (UniqueName: \"kubernetes.io/projected/696aecc5-9837-4941-a9e2-06c1743b6983-kube-api-access-zwktj\") pod \"696aecc5-9837-4941-a9e2-06c1743b6983\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.920144 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-combined-ca-bundle\") pod \"696aecc5-9837-4941-a9e2-06c1743b6983\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.920203 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-scripts\") pod \"696aecc5-9837-4941-a9e2-06c1743b6983\" (UID: \"696aecc5-9837-4941-a9e2-06c1743b6983\") " Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.921361 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/696aecc5-9837-4941-a9e2-06c1743b6983-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "696aecc5-9837-4941-a9e2-06c1743b6983" (UID: "696aecc5-9837-4941-a9e2-06c1743b6983"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.921798 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/696aecc5-9837-4941-a9e2-06c1743b6983-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "696aecc5-9837-4941-a9e2-06c1743b6983" (UID: "696aecc5-9837-4941-a9e2-06c1743b6983"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.924926 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-scripts" (OuterVolumeSpecName: "scripts") pod "696aecc5-9837-4941-a9e2-06c1743b6983" (UID: "696aecc5-9837-4941-a9e2-06c1743b6983"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.942360 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/696aecc5-9837-4941-a9e2-06c1743b6983-kube-api-access-zwktj" (OuterVolumeSpecName: "kube-api-access-zwktj") pod "696aecc5-9837-4941-a9e2-06c1743b6983" (UID: "696aecc5-9837-4941-a9e2-06c1743b6983"). InnerVolumeSpecName "kube-api-access-zwktj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:36 crc kubenswrapper[4942]: I0218 19:36:36.972369 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "696aecc5-9837-4941-a9e2-06c1743b6983" (UID: "696aecc5-9837-4941-a9e2-06c1743b6983"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.024686 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwktj\" (UniqueName: \"kubernetes.io/projected/696aecc5-9837-4941-a9e2-06c1743b6983-kube-api-access-zwktj\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.024716 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.024726 4942 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.024735 4942 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/696aecc5-9837-4941-a9e2-06c1743b6983-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.024743 4942 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/696aecc5-9837-4941-a9e2-06c1743b6983-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.068976 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab301488-e86d-4ba2-b628-f4ea689acd3b" path="/var/lib/kubelet/pods/ab301488-e86d-4ba2-b628-f4ea689acd3b/volumes" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.095975 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "696aecc5-9837-4941-a9e2-06c1743b6983" (UID: "696aecc5-9837-4941-a9e2-06c1743b6983"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.096033 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-config-data" (OuterVolumeSpecName: "config-data") pod "696aecc5-9837-4941-a9e2-06c1743b6983" (UID: "696aecc5-9837-4941-a9e2-06c1743b6983"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.126455 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.126484 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/696aecc5-9837-4941-a9e2-06c1743b6983-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.773343 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"696aecc5-9837-4941-a9e2-06c1743b6983","Type":"ContainerDied","Data":"307cb7b1955145e6c351de8d60f608d94882a5c445ff5005916b7c10fe933d13"} Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.773390 4942 scope.go:117] "RemoveContainer" containerID="dab22ef643cf4a1848ae3f2c3600077ca2b9255a63d8f2ec325041316075d69d" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.773399 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.799396 4942 scope.go:117] "RemoveContainer" containerID="ff5a66ca95a9acb98874490f26d4d917450e3dbd52c6493e4894edb793c0261b" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.811549 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.820671 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.827651 4942 scope.go:117] "RemoveContainer" containerID="7bd1dc3d7ceb9cd510d24aaa8a624c13e7f5dd415a98c0dc54d4fb8d58f9ca84" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.843245 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:37 crc kubenswrapper[4942]: E0218 19:36:37.843617 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab301488-e86d-4ba2-b628-f4ea689acd3b" containerName="placement-log" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.843630 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab301488-e86d-4ba2-b628-f4ea689acd3b" containerName="placement-log" Feb 18 19:36:37 crc kubenswrapper[4942]: E0218 19:36:37.843647 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab301488-e86d-4ba2-b628-f4ea689acd3b" containerName="placement-api" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.843653 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab301488-e86d-4ba2-b628-f4ea689acd3b" containerName="placement-api" Feb 18 19:36:37 crc kubenswrapper[4942]: E0218 19:36:37.843669 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="ceilometer-notification-agent" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.843675 4942 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="ceilometer-notification-agent" Feb 18 19:36:37 crc kubenswrapper[4942]: E0218 19:36:37.843692 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="proxy-httpd" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.843697 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="proxy-httpd" Feb 18 19:36:37 crc kubenswrapper[4942]: E0218 19:36:37.843711 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="sg-core" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.843717 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="sg-core" Feb 18 19:36:37 crc kubenswrapper[4942]: E0218 19:36:37.843729 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="ceilometer-central-agent" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.843736 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="ceilometer-central-agent" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.843907 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab301488-e86d-4ba2-b628-f4ea689acd3b" containerName="placement-log" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.843918 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab301488-e86d-4ba2-b628-f4ea689acd3b" containerName="placement-api" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.843926 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="proxy-httpd" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.843940 4942 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="ceilometer-notification-agent" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.843951 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="sg-core" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.843960 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" containerName="ceilometer-central-agent" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.845601 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.857355 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.857429 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.858008 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.868368 4942 scope.go:117] "RemoveContainer" containerID="f199cea9b51631457ac52fd4aa8f018a58676c04337cc4ce60e41428c59205eb" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.942005 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236551c8-9c37-4188-aea0-7ea6cb91c093-run-httpd\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.942206 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236551c8-9c37-4188-aea0-7ea6cb91c093-log-httpd\") pod \"ceilometer-0\" (UID: 
\"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.942292 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-scripts\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.942354 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.942439 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-config-data\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.942487 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:37 crc kubenswrapper[4942]: I0218 19:36:37.942632 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgml2\" (UniqueName: \"kubernetes.io/projected/236551c8-9c37-4188-aea0-7ea6cb91c093-kube-api-access-rgml2\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 
19:36:38.044050 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236551c8-9c37-4188-aea0-7ea6cb91c093-log-httpd\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.044112 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-scripts\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.044138 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.044180 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-config-data\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.044538 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236551c8-9c37-4188-aea0-7ea6cb91c093-log-httpd\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.044946 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.045382 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgml2\" (UniqueName: \"kubernetes.io/projected/236551c8-9c37-4188-aea0-7ea6cb91c093-kube-api-access-rgml2\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.045493 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236551c8-9c37-4188-aea0-7ea6cb91c093-run-httpd\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.045970 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236551c8-9c37-4188-aea0-7ea6cb91c093-run-httpd\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.049443 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-scripts\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.049456 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.051598 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.055950 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-config-data\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.071671 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgml2\" (UniqueName: \"kubernetes.io/projected/236551c8-9c37-4188-aea0-7ea6cb91c093-kube-api-access-rgml2\") pod \"ceilometer-0\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.178960 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.507031 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.701587 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:38 crc kubenswrapper[4942]: I0218 19:36:38.784114 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236551c8-9c37-4188-aea0-7ea6cb91c093","Type":"ContainerStarted","Data":"e8bdb79b574b6c2621bf8442e6633e45aa4f74b8b682ec57dcc5865cbb5bdecf"} Feb 18 19:36:39 crc kubenswrapper[4942]: I0218 19:36:39.047858 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="696aecc5-9837-4941-a9e2-06c1743b6983" path="/var/lib/kubelet/pods/696aecc5-9837-4941-a9e2-06c1743b6983/volumes" Feb 18 19:36:40 crc kubenswrapper[4942]: I0218 19:36:40.424071 4942 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 19:36:40 crc kubenswrapper[4942]: I0218 19:36:40.424427 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 18 19:36:40 crc kubenswrapper[4942]: I0218 19:36:40.457374 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 19:36:40 crc kubenswrapper[4942]: I0218 19:36:40.472954 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 18 19:36:40 crc kubenswrapper[4942]: I0218 19:36:40.810166 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 19:36:40 crc kubenswrapper[4942]: I0218 19:36:40.810206 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 18 19:36:42 crc kubenswrapper[4942]: I0218 19:36:42.714431 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 19:36:42 crc kubenswrapper[4942]: I0218 19:36:42.715134 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 18 19:36:42 crc kubenswrapper[4942]: I0218 19:36:42.938712 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 19:36:42 crc kubenswrapper[4942]: I0218 19:36:42.939047 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 18 19:36:42 crc kubenswrapper[4942]: I0218 19:36:42.970463 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 19:36:42 crc kubenswrapper[4942]: I0218 19:36:42.985824 4942 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 18 19:36:43 crc kubenswrapper[4942]: I0218 19:36:43.848200 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 19:36:43 crc kubenswrapper[4942]: I0218 19:36:43.848245 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 18 19:36:44 crc kubenswrapper[4942]: I0218 19:36:44.445905 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:36:44 crc kubenswrapper[4942]: I0218 19:36:44.446138 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="9cf66c1e-2f67-4785-85e9-f0b06e578d29" containerName="watcher-decision-engine" containerID="cri-o://565df78e0898331235735ffa8948cdc3dea82d61dc2d3519faa61301dd4f6ffd" gracePeriod=30 Feb 18 19:36:44 crc kubenswrapper[4942]: I0218 19:36:44.853284 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236551c8-9c37-4188-aea0-7ea6cb91c093","Type":"ContainerStarted","Data":"58abb8ce5186e3b36cb68b8f09a80be99bc7c5dac34cbf75929681b8480cc49b"} Feb 18 19:36:44 crc kubenswrapper[4942]: I0218 19:36:44.855037 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bbrrn" event={"ID":"e14c764c-c1b5-4196-a48b-2aff4c38782b","Type":"ContainerStarted","Data":"ebb11ccd20be89bb58e99f7b4e01c65708315c8dea33a27fefa79d1ee13756e9"} Feb 18 19:36:45 crc kubenswrapper[4942]: I0218 19:36:45.872133 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236551c8-9c37-4188-aea0-7ea6cb91c093","Type":"ContainerStarted","Data":"3dd6fe39fe21a60b5f9b0d19a22b65bac95a5d9ab95a36271b835d36d69e15fd"} Feb 18 19:36:45 crc kubenswrapper[4942]: I0218 19:36:45.872422 4942 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236551c8-9c37-4188-aea0-7ea6cb91c093","Type":"ContainerStarted","Data":"656ce3f7f19932e4f5068737003e48886455c0d9f03c836dc5b0675a5d689546"} Feb 18 19:36:45 crc kubenswrapper[4942]: I0218 19:36:45.934450 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 19:36:45 crc kubenswrapper[4942]: I0218 19:36:45.934536 4942 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 19:36:45 crc kubenswrapper[4942]: I0218 19:36:45.936030 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 18 19:36:45 crc kubenswrapper[4942]: I0218 19:36:45.964298 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-bbrrn" podStartSLOduration=3.776740416 podStartE2EDuration="12.964280369s" podCreationTimestamp="2026-02-18 19:36:33 +0000 UTC" firstStartedPulling="2026-02-18 19:36:34.874398655 +0000 UTC m=+1154.579331310" lastFinishedPulling="2026-02-18 19:36:44.061938608 +0000 UTC m=+1163.766871263" observedRunningTime="2026-02-18 19:36:44.874457899 +0000 UTC m=+1164.579390564" watchObservedRunningTime="2026-02-18 19:36:45.964280369 +0000 UTC m=+1165.669213034" Feb 18 19:36:48 crc kubenswrapper[4942]: I0218 19:36:48.900603 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236551c8-9c37-4188-aea0-7ea6cb91c093","Type":"ContainerStarted","Data":"df343275baf5cd2afba42ef23b7d382a0815790debbcfdcf638e76927f78b7e1"} Feb 18 19:36:48 crc kubenswrapper[4942]: I0218 19:36:48.901037 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="ceilometer-central-agent" 
containerID="cri-o://58abb8ce5186e3b36cb68b8f09a80be99bc7c5dac34cbf75929681b8480cc49b" gracePeriod=30 Feb 18 19:36:48 crc kubenswrapper[4942]: I0218 19:36:48.901209 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="proxy-httpd" containerID="cri-o://df343275baf5cd2afba42ef23b7d382a0815790debbcfdcf638e76927f78b7e1" gracePeriod=30 Feb 18 19:36:48 crc kubenswrapper[4942]: I0218 19:36:48.901247 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="sg-core" containerID="cri-o://3dd6fe39fe21a60b5f9b0d19a22b65bac95a5d9ab95a36271b835d36d69e15fd" gracePeriod=30 Feb 18 19:36:48 crc kubenswrapper[4942]: I0218 19:36:48.901279 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="ceilometer-notification-agent" containerID="cri-o://656ce3f7f19932e4f5068737003e48886455c0d9f03c836dc5b0675a5d689546" gracePeriod=30 Feb 18 19:36:48 crc kubenswrapper[4942]: I0218 19:36:48.901341 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:36:49 crc kubenswrapper[4942]: I0218 19:36:49.914145 4942 generic.go:334] "Generic (PLEG): container finished" podID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerID="df343275baf5cd2afba42ef23b7d382a0815790debbcfdcf638e76927f78b7e1" exitCode=0 Feb 18 19:36:49 crc kubenswrapper[4942]: I0218 19:36:49.914468 4942 generic.go:334] "Generic (PLEG): container finished" podID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerID="3dd6fe39fe21a60b5f9b0d19a22b65bac95a5d9ab95a36271b835d36d69e15fd" exitCode=2 Feb 18 19:36:49 crc kubenswrapper[4942]: I0218 19:36:49.914235 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"236551c8-9c37-4188-aea0-7ea6cb91c093","Type":"ContainerDied","Data":"df343275baf5cd2afba42ef23b7d382a0815790debbcfdcf638e76927f78b7e1"} Feb 18 19:36:49 crc kubenswrapper[4942]: I0218 19:36:49.914528 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236551c8-9c37-4188-aea0-7ea6cb91c093","Type":"ContainerDied","Data":"3dd6fe39fe21a60b5f9b0d19a22b65bac95a5d9ab95a36271b835d36d69e15fd"} Feb 18 19:36:49 crc kubenswrapper[4942]: I0218 19:36:49.914553 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236551c8-9c37-4188-aea0-7ea6cb91c093","Type":"ContainerDied","Data":"656ce3f7f19932e4f5068737003e48886455c0d9f03c836dc5b0675a5d689546"} Feb 18 19:36:49 crc kubenswrapper[4942]: I0218 19:36:49.914484 4942 generic.go:334] "Generic (PLEG): container finished" podID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerID="656ce3f7f19932e4f5068737003e48886455c0d9f03c836dc5b0675a5d689546" exitCode=0 Feb 18 19:36:50 crc kubenswrapper[4942]: I0218 19:36:50.923123 4942 generic.go:334] "Generic (PLEG): container finished" podID="9cf66c1e-2f67-4785-85e9-f0b06e578d29" containerID="565df78e0898331235735ffa8948cdc3dea82d61dc2d3519faa61301dd4f6ffd" exitCode=0 Feb 18 19:36:50 crc kubenswrapper[4942]: I0218 19:36:50.923203 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"9cf66c1e-2f67-4785-85e9-f0b06e578d29","Type":"ContainerDied","Data":"565df78e0898331235735ffa8948cdc3dea82d61dc2d3519faa61301dd4f6ffd"} Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.473105 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.498580 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.020255265 podStartE2EDuration="14.498558927s" podCreationTimestamp="2026-02-18 19:36:37 +0000 UTC" firstStartedPulling="2026-02-18 19:36:38.717696648 +0000 UTC m=+1158.422629303" lastFinishedPulling="2026-02-18 19:36:48.1960003 +0000 UTC m=+1167.900932965" observedRunningTime="2026-02-18 19:36:48.925550358 +0000 UTC m=+1168.630483043" watchObservedRunningTime="2026-02-18 19:36:51.498558927 +0000 UTC m=+1171.203491592" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.520406 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-combined-ca-bundle\") pod \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.520996 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-custom-prometheus-ca\") pod \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.521238 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-config-data\") pod \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.521317 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cf66c1e-2f67-4785-85e9-f0b06e578d29-logs\") pod 
\"9cf66c1e-2f67-4785-85e9-f0b06e578d29\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.521419 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89hpq\" (UniqueName: \"kubernetes.io/projected/9cf66c1e-2f67-4785-85e9-f0b06e578d29-kube-api-access-89hpq\") pod \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\" (UID: \"9cf66c1e-2f67-4785-85e9-f0b06e578d29\") " Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.521718 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cf66c1e-2f67-4785-85e9-f0b06e578d29-logs" (OuterVolumeSpecName: "logs") pod "9cf66c1e-2f67-4785-85e9-f0b06e578d29" (UID: "9cf66c1e-2f67-4785-85e9-f0b06e578d29"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.522746 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cf66c1e-2f67-4785-85e9-f0b06e578d29-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.530094 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cf66c1e-2f67-4785-85e9-f0b06e578d29-kube-api-access-89hpq" (OuterVolumeSpecName: "kube-api-access-89hpq") pod "9cf66c1e-2f67-4785-85e9-f0b06e578d29" (UID: "9cf66c1e-2f67-4785-85e9-f0b06e578d29"). InnerVolumeSpecName "kube-api-access-89hpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.553451 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9cf66c1e-2f67-4785-85e9-f0b06e578d29" (UID: "9cf66c1e-2f67-4785-85e9-f0b06e578d29"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.572991 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "9cf66c1e-2f67-4785-85e9-f0b06e578d29" (UID: "9cf66c1e-2f67-4785-85e9-f0b06e578d29"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.576452 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-config-data" (OuterVolumeSpecName: "config-data") pod "9cf66c1e-2f67-4785-85e9-f0b06e578d29" (UID: "9cf66c1e-2f67-4785-85e9-f0b06e578d29"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.623998 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.624024 4942 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.624035 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cf66c1e-2f67-4785-85e9-f0b06e578d29-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.624044 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89hpq\" (UniqueName: \"kubernetes.io/projected/9cf66c1e-2f67-4785-85e9-f0b06e578d29-kube-api-access-89hpq\") on node 
\"crc\" DevicePath \"\"" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.942726 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"9cf66c1e-2f67-4785-85e9-f0b06e578d29","Type":"ContainerDied","Data":"e7e10840e11edbe6af151474727a77162010126b060487f8547836dcab0bb348"} Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.942805 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.943002 4942 scope.go:117] "RemoveContainer" containerID="565df78e0898331235735ffa8948cdc3dea82d61dc2d3519faa61301dd4f6ffd" Feb 18 19:36:51 crc kubenswrapper[4942]: I0218 19:36:51.994457 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.022797 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.031811 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:36:52 crc kubenswrapper[4942]: E0218 19:36:52.032295 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf66c1e-2f67-4785-85e9-f0b06e578d29" containerName="watcher-decision-engine" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.032322 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf66c1e-2f67-4785-85e9-f0b06e578d29" containerName="watcher-decision-engine" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.032539 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cf66c1e-2f67-4785-85e9-f0b06e578d29" containerName="watcher-decision-engine" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.033284 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.035393 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.052832 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.132483 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4dcf4849-8f10-4a95-9168-8933cc67b424-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.132604 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dcf4849-8f10-4a95-9168-8933cc67b424-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.132680 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dcf4849-8f10-4a95-9168-8933cc67b424-logs\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.132704 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m95fm\" (UniqueName: \"kubernetes.io/projected/4dcf4849-8f10-4a95-9168-8933cc67b424-kube-api-access-m95fm\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " 
pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.132838 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dcf4849-8f10-4a95-9168-8933cc67b424-config-data\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.234600 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dcf4849-8f10-4a95-9168-8933cc67b424-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.234946 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dcf4849-8f10-4a95-9168-8933cc67b424-logs\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.235080 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m95fm\" (UniqueName: \"kubernetes.io/projected/4dcf4849-8f10-4a95-9168-8933cc67b424-kube-api-access-m95fm\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.235229 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dcf4849-8f10-4a95-9168-8933cc67b424-config-data\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: 
I0218 19:36:52.235374 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4dcf4849-8f10-4a95-9168-8933cc67b424-logs\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.235506 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4dcf4849-8f10-4a95-9168-8933cc67b424-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.239795 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dcf4849-8f10-4a95-9168-8933cc67b424-config-data\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.241243 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dcf4849-8f10-4a95-9168-8933cc67b424-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.242416 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4dcf4849-8f10-4a95-9168-8933cc67b424-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.252611 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m95fm\" 
(UniqueName: \"kubernetes.io/projected/4dcf4849-8f10-4a95-9168-8933cc67b424-kube-api-access-m95fm\") pod \"watcher-decision-engine-0\" (UID: \"4dcf4849-8f10-4a95-9168-8933cc67b424\") " pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.372658 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.830867 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 18 19:36:52 crc kubenswrapper[4942]: I0218 19:36:52.963624 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4dcf4849-8f10-4a95-9168-8933cc67b424","Type":"ContainerStarted","Data":"07abe5ef3f3d69bbf6db31ae0d309c288435fa7a22331eb7453098f36ae14d6a"} Feb 18 19:36:53 crc kubenswrapper[4942]: I0218 19:36:53.047207 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cf66c1e-2f67-4785-85e9-f0b06e578d29" path="/var/lib/kubelet/pods/9cf66c1e-2f67-4785-85e9-f0b06e578d29/volumes" Feb 18 19:36:53 crc kubenswrapper[4942]: I0218 19:36:53.974718 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4dcf4849-8f10-4a95-9168-8933cc67b424","Type":"ContainerStarted","Data":"8e58bab231be9a36ff597ec486b5cf488a59d8a85c6730bc1ba3360922d52b13"} Feb 18 19:36:53 crc kubenswrapper[4942]: I0218 19:36:53.999807 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.999742101 podStartE2EDuration="2.999742101s" podCreationTimestamp="2026-02-18 19:36:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:36:53.989749858 +0000 UTC m=+1173.694682523" watchObservedRunningTime="2026-02-18 19:36:53.999742101 +0000 UTC 
m=+1173.704674806" Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.897572 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.943626 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-scripts\") pod \"236551c8-9c37-4188-aea0-7ea6cb91c093\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.943748 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236551c8-9c37-4188-aea0-7ea6cb91c093-run-httpd\") pod \"236551c8-9c37-4188-aea0-7ea6cb91c093\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.943827 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-config-data\") pod \"236551c8-9c37-4188-aea0-7ea6cb91c093\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.943875 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-sg-core-conf-yaml\") pod \"236551c8-9c37-4188-aea0-7ea6cb91c093\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.943935 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-combined-ca-bundle\") pod \"236551c8-9c37-4188-aea0-7ea6cb91c093\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.944007 
4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgml2\" (UniqueName: \"kubernetes.io/projected/236551c8-9c37-4188-aea0-7ea6cb91c093-kube-api-access-rgml2\") pod \"236551c8-9c37-4188-aea0-7ea6cb91c093\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.944092 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236551c8-9c37-4188-aea0-7ea6cb91c093-log-httpd\") pod \"236551c8-9c37-4188-aea0-7ea6cb91c093\" (UID: \"236551c8-9c37-4188-aea0-7ea6cb91c093\") " Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.944166 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/236551c8-9c37-4188-aea0-7ea6cb91c093-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "236551c8-9c37-4188-aea0-7ea6cb91c093" (UID: "236551c8-9c37-4188-aea0-7ea6cb91c093"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.944522 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/236551c8-9c37-4188-aea0-7ea6cb91c093-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "236551c8-9c37-4188-aea0-7ea6cb91c093" (UID: "236551c8-9c37-4188-aea0-7ea6cb91c093"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.944588 4942 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236551c8-9c37-4188-aea0-7ea6cb91c093-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.949460 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/236551c8-9c37-4188-aea0-7ea6cb91c093-kube-api-access-rgml2" (OuterVolumeSpecName: "kube-api-access-rgml2") pod "236551c8-9c37-4188-aea0-7ea6cb91c093" (UID: "236551c8-9c37-4188-aea0-7ea6cb91c093"). InnerVolumeSpecName "kube-api-access-rgml2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.951957 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-scripts" (OuterVolumeSpecName: "scripts") pod "236551c8-9c37-4188-aea0-7ea6cb91c093" (UID: "236551c8-9c37-4188-aea0-7ea6cb91c093"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:57 crc kubenswrapper[4942]: I0218 19:36:57.978392 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "236551c8-9c37-4188-aea0-7ea6cb91c093" (UID: "236551c8-9c37-4188-aea0-7ea6cb91c093"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.022164 4942 generic.go:334] "Generic (PLEG): container finished" podID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerID="58abb8ce5186e3b36cb68b8f09a80be99bc7c5dac34cbf75929681b8480cc49b" exitCode=0 Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.022211 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236551c8-9c37-4188-aea0-7ea6cb91c093","Type":"ContainerDied","Data":"58abb8ce5186e3b36cb68b8f09a80be99bc7c5dac34cbf75929681b8480cc49b"} Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.022240 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236551c8-9c37-4188-aea0-7ea6cb91c093","Type":"ContainerDied","Data":"e8bdb79b574b6c2621bf8442e6633e45aa4f74b8b682ec57dcc5865cbb5bdecf"} Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.022262 4942 scope.go:117] "RemoveContainer" containerID="df343275baf5cd2afba42ef23b7d382a0815790debbcfdcf638e76927f78b7e1" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.022420 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.023906 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "236551c8-9c37-4188-aea0-7ea6cb91c093" (UID: "236551c8-9c37-4188-aea0-7ea6cb91c093"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.042676 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-config-data" (OuterVolumeSpecName: "config-data") pod "236551c8-9c37-4188-aea0-7ea6cb91c093" (UID: "236551c8-9c37-4188-aea0-7ea6cb91c093"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.045905 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.046023 4942 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.046086 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.046144 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgml2\" (UniqueName: \"kubernetes.io/projected/236551c8-9c37-4188-aea0-7ea6cb91c093-kube-api-access-rgml2\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.046282 4942 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236551c8-9c37-4188-aea0-7ea6cb91c093-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.046350 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/236551c8-9c37-4188-aea0-7ea6cb91c093-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.065210 4942 scope.go:117] "RemoveContainer" containerID="3dd6fe39fe21a60b5f9b0d19a22b65bac95a5d9ab95a36271b835d36d69e15fd" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.082691 4942 scope.go:117] "RemoveContainer" containerID="656ce3f7f19932e4f5068737003e48886455c0d9f03c836dc5b0675a5d689546" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.102321 4942 scope.go:117] "RemoveContainer" containerID="58abb8ce5186e3b36cb68b8f09a80be99bc7c5dac34cbf75929681b8480cc49b" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.118580 4942 scope.go:117] "RemoveContainer" containerID="df343275baf5cd2afba42ef23b7d382a0815790debbcfdcf638e76927f78b7e1" Feb 18 19:36:58 crc kubenswrapper[4942]: E0218 19:36:58.119008 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df343275baf5cd2afba42ef23b7d382a0815790debbcfdcf638e76927f78b7e1\": container with ID starting with df343275baf5cd2afba42ef23b7d382a0815790debbcfdcf638e76927f78b7e1 not found: ID does not exist" containerID="df343275baf5cd2afba42ef23b7d382a0815790debbcfdcf638e76927f78b7e1" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.119048 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df343275baf5cd2afba42ef23b7d382a0815790debbcfdcf638e76927f78b7e1"} err="failed to get container status \"df343275baf5cd2afba42ef23b7d382a0815790debbcfdcf638e76927f78b7e1\": rpc error: code = NotFound desc = could not find container \"df343275baf5cd2afba42ef23b7d382a0815790debbcfdcf638e76927f78b7e1\": container with ID starting with df343275baf5cd2afba42ef23b7d382a0815790debbcfdcf638e76927f78b7e1 not found: ID does not exist" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.119074 4942 scope.go:117] "RemoveContainer" 
containerID="3dd6fe39fe21a60b5f9b0d19a22b65bac95a5d9ab95a36271b835d36d69e15fd" Feb 18 19:36:58 crc kubenswrapper[4942]: E0218 19:36:58.119427 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dd6fe39fe21a60b5f9b0d19a22b65bac95a5d9ab95a36271b835d36d69e15fd\": container with ID starting with 3dd6fe39fe21a60b5f9b0d19a22b65bac95a5d9ab95a36271b835d36d69e15fd not found: ID does not exist" containerID="3dd6fe39fe21a60b5f9b0d19a22b65bac95a5d9ab95a36271b835d36d69e15fd" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.119466 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dd6fe39fe21a60b5f9b0d19a22b65bac95a5d9ab95a36271b835d36d69e15fd"} err="failed to get container status \"3dd6fe39fe21a60b5f9b0d19a22b65bac95a5d9ab95a36271b835d36d69e15fd\": rpc error: code = NotFound desc = could not find container \"3dd6fe39fe21a60b5f9b0d19a22b65bac95a5d9ab95a36271b835d36d69e15fd\": container with ID starting with 3dd6fe39fe21a60b5f9b0d19a22b65bac95a5d9ab95a36271b835d36d69e15fd not found: ID does not exist" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.119492 4942 scope.go:117] "RemoveContainer" containerID="656ce3f7f19932e4f5068737003e48886455c0d9f03c836dc5b0675a5d689546" Feb 18 19:36:58 crc kubenswrapper[4942]: E0218 19:36:58.119997 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"656ce3f7f19932e4f5068737003e48886455c0d9f03c836dc5b0675a5d689546\": container with ID starting with 656ce3f7f19932e4f5068737003e48886455c0d9f03c836dc5b0675a5d689546 not found: ID does not exist" containerID="656ce3f7f19932e4f5068737003e48886455c0d9f03c836dc5b0675a5d689546" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.120021 4942 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"656ce3f7f19932e4f5068737003e48886455c0d9f03c836dc5b0675a5d689546"} err="failed to get container status \"656ce3f7f19932e4f5068737003e48886455c0d9f03c836dc5b0675a5d689546\": rpc error: code = NotFound desc = could not find container \"656ce3f7f19932e4f5068737003e48886455c0d9f03c836dc5b0675a5d689546\": container with ID starting with 656ce3f7f19932e4f5068737003e48886455c0d9f03c836dc5b0675a5d689546 not found: ID does not exist" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.120037 4942 scope.go:117] "RemoveContainer" containerID="58abb8ce5186e3b36cb68b8f09a80be99bc7c5dac34cbf75929681b8480cc49b" Feb 18 19:36:58 crc kubenswrapper[4942]: E0218 19:36:58.120377 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58abb8ce5186e3b36cb68b8f09a80be99bc7c5dac34cbf75929681b8480cc49b\": container with ID starting with 58abb8ce5186e3b36cb68b8f09a80be99bc7c5dac34cbf75929681b8480cc49b not found: ID does not exist" containerID="58abb8ce5186e3b36cb68b8f09a80be99bc7c5dac34cbf75929681b8480cc49b" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.120400 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58abb8ce5186e3b36cb68b8f09a80be99bc7c5dac34cbf75929681b8480cc49b"} err="failed to get container status \"58abb8ce5186e3b36cb68b8f09a80be99bc7c5dac34cbf75929681b8480cc49b\": rpc error: code = NotFound desc = could not find container \"58abb8ce5186e3b36cb68b8f09a80be99bc7c5dac34cbf75929681b8480cc49b\": container with ID starting with 58abb8ce5186e3b36cb68b8f09a80be99bc7c5dac34cbf75929681b8480cc49b not found: ID does not exist" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.370800 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.393349 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 
18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.405660 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:58 crc kubenswrapper[4942]: E0218 19:36:58.406492 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="sg-core" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.406617 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="sg-core" Feb 18 19:36:58 crc kubenswrapper[4942]: E0218 19:36:58.406742 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="proxy-httpd" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.406869 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="proxy-httpd" Feb 18 19:36:58 crc kubenswrapper[4942]: E0218 19:36:58.406985 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="ceilometer-central-agent" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.407072 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="ceilometer-central-agent" Feb 18 19:36:58 crc kubenswrapper[4942]: E0218 19:36:58.407272 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="ceilometer-notification-agent" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.407377 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="ceilometer-notification-agent" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.407719 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="proxy-httpd" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 
19:36:58.407870 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="ceilometer-notification-agent" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.408000 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="sg-core" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.408140 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" containerName="ceilometer-central-agent" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.410690 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.413409 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.420009 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.441274 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.459975 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-config-data\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.460031 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc 
kubenswrapper[4942]: I0218 19:36:58.460055 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-scripts\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.460097 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2dh2\" (UniqueName: \"kubernetes.io/projected/4981b67f-ebf1-4d2e-a717-67edbc242474-kube-api-access-v2dh2\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.460147 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4981b67f-ebf1-4d2e-a717-67edbc242474-run-httpd\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.460173 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.460220 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4981b67f-ebf1-4d2e-a717-67edbc242474-log-httpd\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.561274 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.561591 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-scripts\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.561658 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2dh2\" (UniqueName: \"kubernetes.io/projected/4981b67f-ebf1-4d2e-a717-67edbc242474-kube-api-access-v2dh2\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.561721 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4981b67f-ebf1-4d2e-a717-67edbc242474-run-httpd\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.561748 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.561805 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4981b67f-ebf1-4d2e-a717-67edbc242474-log-httpd\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 
crc kubenswrapper[4942]: I0218 19:36:58.561888 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-config-data\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.562297 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4981b67f-ebf1-4d2e-a717-67edbc242474-run-httpd\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.562502 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4981b67f-ebf1-4d2e-a717-67edbc242474-log-httpd\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.566915 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-scripts\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.567208 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-config-data\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.567916 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.568042 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.582058 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2dh2\" (UniqueName: \"kubernetes.io/projected/4981b67f-ebf1-4d2e-a717-67edbc242474-kube-api-access-v2dh2\") pod \"ceilometer-0\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " pod="openstack/ceilometer-0" Feb 18 19:36:58 crc kubenswrapper[4942]: I0218 19:36:58.732335 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:36:59 crc kubenswrapper[4942]: I0218 19:36:59.033297 4942 generic.go:334] "Generic (PLEG): container finished" podID="e14c764c-c1b5-4196-a48b-2aff4c38782b" containerID="ebb11ccd20be89bb58e99f7b4e01c65708315c8dea33a27fefa79d1ee13756e9" exitCode=0 Feb 18 19:36:59 crc kubenswrapper[4942]: I0218 19:36:59.033350 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bbrrn" event={"ID":"e14c764c-c1b5-4196-a48b-2aff4c38782b","Type":"ContainerDied","Data":"ebb11ccd20be89bb58e99f7b4e01c65708315c8dea33a27fefa79d1ee13756e9"} Feb 18 19:36:59 crc kubenswrapper[4942]: I0218 19:36:59.048228 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="236551c8-9c37-4188-aea0-7ea6cb91c093" path="/var/lib/kubelet/pods/236551c8-9c37-4188-aea0-7ea6cb91c093/volumes" Feb 18 19:36:59 crc kubenswrapper[4942]: W0218 19:36:59.163910 4942 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4981b67f_ebf1_4d2e_a717_67edbc242474.slice/crio-30ee69691a3055e0e7dae81b55c1720ecd6dcb44e21ef193aead637c15341932 WatchSource:0}: Error finding container 30ee69691a3055e0e7dae81b55c1720ecd6dcb44e21ef193aead637c15341932: Status 404 returned error can't find the container with id 30ee69691a3055e0e7dae81b55c1720ecd6dcb44e21ef193aead637c15341932 Feb 18 19:36:59 crc kubenswrapper[4942]: I0218 19:36:59.166542 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:36:59 crc kubenswrapper[4942]: I0218 19:36:59.167002 4942 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.047692 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4981b67f-ebf1-4d2e-a717-67edbc242474","Type":"ContainerStarted","Data":"4b1f480ddf927d40a046c53a831832dfc0661e5a1bfbb9d0061a5f0118ebd54e"} Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.047794 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4981b67f-ebf1-4d2e-a717-67edbc242474","Type":"ContainerStarted","Data":"30ee69691a3055e0e7dae81b55c1720ecd6dcb44e21ef193aead637c15341932"} Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.364930 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bbrrn" Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.399070 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-config-data\") pod \"e14c764c-c1b5-4196-a48b-2aff4c38782b\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.399211 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzdtz\" (UniqueName: \"kubernetes.io/projected/e14c764c-c1b5-4196-a48b-2aff4c38782b-kube-api-access-hzdtz\") pod \"e14c764c-c1b5-4196-a48b-2aff4c38782b\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.399287 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-scripts\") pod \"e14c764c-c1b5-4196-a48b-2aff4c38782b\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.399381 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-combined-ca-bundle\") pod \"e14c764c-c1b5-4196-a48b-2aff4c38782b\" (UID: \"e14c764c-c1b5-4196-a48b-2aff4c38782b\") " Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.423142 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e14c764c-c1b5-4196-a48b-2aff4c38782b-kube-api-access-hzdtz" (OuterVolumeSpecName: "kube-api-access-hzdtz") pod "e14c764c-c1b5-4196-a48b-2aff4c38782b" (UID: "e14c764c-c1b5-4196-a48b-2aff4c38782b"). InnerVolumeSpecName "kube-api-access-hzdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.423600 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-scripts" (OuterVolumeSpecName: "scripts") pod "e14c764c-c1b5-4196-a48b-2aff4c38782b" (UID: "e14c764c-c1b5-4196-a48b-2aff4c38782b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.427471 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-config-data" (OuterVolumeSpecName: "config-data") pod "e14c764c-c1b5-4196-a48b-2aff4c38782b" (UID: "e14c764c-c1b5-4196-a48b-2aff4c38782b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.444642 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e14c764c-c1b5-4196-a48b-2aff4c38782b" (UID: "e14c764c-c1b5-4196-a48b-2aff4c38782b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.501352 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.501383 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.501392 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzdtz\" (UniqueName: \"kubernetes.io/projected/e14c764c-c1b5-4196-a48b-2aff4c38782b-kube-api-access-hzdtz\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:00 crc kubenswrapper[4942]: I0218 19:37:00.501401 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e14c764c-c1b5-4196-a48b-2aff4c38782b-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.096277 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4981b67f-ebf1-4d2e-a717-67edbc242474","Type":"ContainerStarted","Data":"a0cad6f7c64293c0ca724387bf6888f39862910aef71002691436d4792c6de55"} Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.103417 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-bbrrn" event={"ID":"e14c764c-c1b5-4196-a48b-2aff4c38782b","Type":"ContainerDied","Data":"41848844e5ca0ec4e07ca2fcd7497cd5893a17d562a97a6ad440536587e4b055"} Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.103459 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41848844e5ca0ec4e07ca2fcd7497cd5893a17d562a97a6ad440536587e4b055" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 
19:37:01.103522 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-bbrrn" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.158389 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 19:37:01 crc kubenswrapper[4942]: E0218 19:37:01.158862 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e14c764c-c1b5-4196-a48b-2aff4c38782b" containerName="nova-cell0-conductor-db-sync" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.158887 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e14c764c-c1b5-4196-a48b-2aff4c38782b" containerName="nova-cell0-conductor-db-sync" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.159114 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="e14c764c-c1b5-4196-a48b-2aff4c38782b" containerName="nova-cell0-conductor-db-sync" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.159933 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.162476 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.162694 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-r8ppn" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.181230 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.224878 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec2114e-697d-44fd-ae1e-4da66730e456-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7ec2114e-697d-44fd-ae1e-4da66730e456\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.224923 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec2114e-697d-44fd-ae1e-4da66730e456-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7ec2114e-697d-44fd-ae1e-4da66730e456\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.225008 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q92t\" (UniqueName: \"kubernetes.io/projected/7ec2114e-697d-44fd-ae1e-4da66730e456-kube-api-access-5q92t\") pod \"nova-cell0-conductor-0\" (UID: \"7ec2114e-697d-44fd-ae1e-4da66730e456\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.326989 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7ec2114e-697d-44fd-ae1e-4da66730e456-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7ec2114e-697d-44fd-ae1e-4da66730e456\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.327052 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec2114e-697d-44fd-ae1e-4da66730e456-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7ec2114e-697d-44fd-ae1e-4da66730e456\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.327126 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q92t\" (UniqueName: \"kubernetes.io/projected/7ec2114e-697d-44fd-ae1e-4da66730e456-kube-api-access-5q92t\") pod \"nova-cell0-conductor-0\" (UID: \"7ec2114e-697d-44fd-ae1e-4da66730e456\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.331496 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec2114e-697d-44fd-ae1e-4da66730e456-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7ec2114e-697d-44fd-ae1e-4da66730e456\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.332655 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec2114e-697d-44fd-ae1e-4da66730e456-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7ec2114e-697d-44fd-ae1e-4da66730e456\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.369418 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q92t\" (UniqueName: \"kubernetes.io/projected/7ec2114e-697d-44fd-ae1e-4da66730e456-kube-api-access-5q92t\") pod \"nova-cell0-conductor-0\" 
(UID: \"7ec2114e-697d-44fd-ae1e-4da66730e456\") " pod="openstack/nova-cell0-conductor-0" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.484979 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 19:37:01 crc kubenswrapper[4942]: I0218 19:37:01.942671 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 19:37:02 crc kubenswrapper[4942]: I0218 19:37:02.115286 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4981b67f-ebf1-4d2e-a717-67edbc242474","Type":"ContainerStarted","Data":"611f841412a878234cbc129413f605782e9a919aae0510161f0f5229befb06e0"} Feb 18 19:37:02 crc kubenswrapper[4942]: I0218 19:37:02.117694 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7ec2114e-697d-44fd-ae1e-4da66730e456","Type":"ContainerStarted","Data":"c6ad23870e2b3fe26fcaa7290d1d42830d8faaa7fc9577845df69d239d211029"} Feb 18 19:37:02 crc kubenswrapper[4942]: I0218 19:37:02.374319 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 18 19:37:02 crc kubenswrapper[4942]: I0218 19:37:02.410222 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 18 19:37:03 crc kubenswrapper[4942]: I0218 19:37:03.136540 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7ec2114e-697d-44fd-ae1e-4da66730e456","Type":"ContainerStarted","Data":"04b4a1ff1cc34d018e0ef3fc7471f7b23d32ba2eb3cebed2cda340ab96ccbbfe"} Feb 18 19:37:03 crc kubenswrapper[4942]: I0218 19:37:03.137000 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 18 19:37:03 crc kubenswrapper[4942]: I0218 19:37:03.163901 4942 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.163873952 podStartE2EDuration="2.163873952s" podCreationTimestamp="2026-02-18 19:37:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:03.158143432 +0000 UTC m=+1182.863076127" watchObservedRunningTime="2026-02-18 19:37:03.163873952 +0000 UTC m=+1182.868806657" Feb 18 19:37:03 crc kubenswrapper[4942]: I0218 19:37:03.183564 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 18 19:37:04 crc kubenswrapper[4942]: I0218 19:37:04.148896 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4981b67f-ebf1-4d2e-a717-67edbc242474","Type":"ContainerStarted","Data":"ca5ad4aca4a617b9bbb63455e63bf0207750221ea32250b434a89953bffe9fd9"} Feb 18 19:37:04 crc kubenswrapper[4942]: I0218 19:37:04.149140 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 18 19:37:04 crc kubenswrapper[4942]: I0218 19:37:04.188395 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.104659066 podStartE2EDuration="6.188379722s" podCreationTimestamp="2026-02-18 19:36:58 +0000 UTC" firstStartedPulling="2026-02-18 19:36:59.166675927 +0000 UTC m=+1178.871608602" lastFinishedPulling="2026-02-18 19:37:03.250396573 +0000 UTC m=+1182.955329258" observedRunningTime="2026-02-18 19:37:04.183822752 +0000 UTC m=+1183.888755417" watchObservedRunningTime="2026-02-18 19:37:04.188379722 +0000 UTC m=+1183.893312387" Feb 18 19:37:05 crc kubenswrapper[4942]: I0218 19:37:05.159312 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:37:11 crc kubenswrapper[4942]: I0218 19:37:11.529032 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-cell0-conductor-0" Feb 18 19:37:11 crc kubenswrapper[4942]: I0218 19:37:11.999380 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-6wkkj"] Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.000613 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.011981 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.012133 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.022882 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-6wkkj"] Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.034748 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-config-data\") pod \"nova-cell0-cell-mapping-6wkkj\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.034827 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6d97\" (UniqueName: \"kubernetes.io/projected/b4a19078-b432-452e-8918-7b0f8c60e632-kube-api-access-b6d97\") pod \"nova-cell0-cell-mapping-6wkkj\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.034863 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-scripts\") pod 
\"nova-cell0-cell-mapping-6wkkj\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.034907 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6wkkj\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.137123 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6wkkj\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.137346 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-config-data\") pod \"nova-cell0-cell-mapping-6wkkj\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.137403 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6d97\" (UniqueName: \"kubernetes.io/projected/b4a19078-b432-452e-8918-7b0f8c60e632-kube-api-access-b6d97\") pod \"nova-cell0-cell-mapping-6wkkj\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.137437 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-scripts\") pod 
\"nova-cell0-cell-mapping-6wkkj\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.143684 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-scripts\") pod \"nova-cell0-cell-mapping-6wkkj\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.152464 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6wkkj\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.160369 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-config-data\") pod \"nova-cell0-cell-mapping-6wkkj\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.165392 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6d97\" (UniqueName: \"kubernetes.io/projected/b4a19078-b432-452e-8918-7b0f8c60e632-kube-api-access-b6d97\") pod \"nova-cell0-cell-mapping-6wkkj\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.200952 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.202461 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.208453 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.223927 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.240076 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1366c48-2eab-4f52-b946-41b5cd9682a9-config-data\") pod \"nova-api-0\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.240128 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1366c48-2eab-4f52-b946-41b5cd9682a9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.240163 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1366c48-2eab-4f52-b946-41b5cd9682a9-logs\") pod \"nova-api-0\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.240223 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knn95\" (UniqueName: \"kubernetes.io/projected/f1366c48-2eab-4f52-b946-41b5cd9682a9-kube-api-access-knn95\") pod \"nova-api-0\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.282686 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 
19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.284329 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.289143 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.295408 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.296687 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.307162 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.307638 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.324426 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.333115 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.344305 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlgsg\" (UniqueName: \"kubernetes.io/projected/4e00cb35-640c-4e86-8ef4-9c11a4a83768-kube-api-access-mlgsg\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.344538 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e00cb35-640c-4e86-8ef4-9c11a4a83768-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.344882 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knn95\" (UniqueName: \"kubernetes.io/projected/f1366c48-2eab-4f52-b946-41b5cd9682a9-kube-api-access-knn95\") pod \"nova-api-0\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.346589 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-logs\") pod \"nova-metadata-0\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.347145 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-config-data\") pod \"nova-metadata-0\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.347220 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq77n\" (UniqueName: \"kubernetes.io/projected/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-kube-api-access-hq77n\") pod \"nova-metadata-0\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.347260 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e00cb35-640c-4e86-8ef4-9c11a4a83768-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.347294 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1366c48-2eab-4f52-b946-41b5cd9682a9-config-data\") pod \"nova-api-0\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.347467 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1366c48-2eab-4f52-b946-41b5cd9682a9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.348931 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.349271 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1366c48-2eab-4f52-b946-41b5cd9682a9-logs\") pod \"nova-api-0\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.351467 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1366c48-2eab-4f52-b946-41b5cd9682a9-logs\") pod \"nova-api-0\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.355415 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1366c48-2eab-4f52-b946-41b5cd9682a9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.403796 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knn95\" (UniqueName: \"kubernetes.io/projected/f1366c48-2eab-4f52-b946-41b5cd9682a9-kube-api-access-knn95\") pod \"nova-api-0\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.404578 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1366c48-2eab-4f52-b946-41b5cd9682a9-config-data\") pod \"nova-api-0\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.452581 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlgsg\" (UniqueName: 
\"kubernetes.io/projected/4e00cb35-640c-4e86-8ef4-9c11a4a83768-kube-api-access-mlgsg\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.452639 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e00cb35-640c-4e86-8ef4-9c11a4a83768-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.452868 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-logs\") pod \"nova-metadata-0\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.453252 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-config-data\") pod \"nova-metadata-0\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.453308 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq77n\" (UniqueName: \"kubernetes.io/projected/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-kube-api-access-hq77n\") pod \"nova-metadata-0\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.453343 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e00cb35-640c-4e86-8ef4-9c11a4a83768-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.453585 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.454399 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-logs\") pod \"nova-metadata-0\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.464595 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-4gdxj"] Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.471136 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.475713 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.500648 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e00cb35-640c-4e86-8ef4-9c11a4a83768-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.482032 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e00cb35-640c-4e86-8ef4-9c11a4a83768-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.482055 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-config-data\") pod \"nova-metadata-0\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.501963 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlgsg\" (UniqueName: \"kubernetes.io/projected/4e00cb35-640c-4e86-8ef4-9c11a4a83768-kube-api-access-mlgsg\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.503373 4942 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hq77n\" (UniqueName: \"kubernetes.io/projected/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-kube-api-access-hq77n\") pod \"nova-metadata-0\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") " pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.505525 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-4gdxj"] Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.512730 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.514108 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.522098 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.523523 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.553418 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.555380 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b369297-3ab8-4077-9af5-68455e6f2fa7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b369297-3ab8-4077-9af5-68455e6f2fa7\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.555420 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.555456 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.555470 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shhc6\" (UniqueName: \"kubernetes.io/projected/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-kube-api-access-shhc6\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.555504 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-ovsdbserver-sb\") pod 
\"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.555541 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-config\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.555585 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-dns-svc\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.555604 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b369297-3ab8-4077-9af5-68455e6f2fa7-config-data\") pod \"nova-scheduler-0\" (UID: \"1b369297-3ab8-4077-9af5-68455e6f2fa7\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.555627 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5blns\" (UniqueName: \"kubernetes.io/projected/1b369297-3ab8-4077-9af5-68455e6f2fa7-kube-api-access-5blns\") pod \"nova-scheduler-0\" (UID: \"1b369297-3ab8-4077-9af5-68455e6f2fa7\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.610013 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.619574 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.656147 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-config\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.656227 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-dns-svc\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.656249 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b369297-3ab8-4077-9af5-68455e6f2fa7-config-data\") pod \"nova-scheduler-0\" (UID: \"1b369297-3ab8-4077-9af5-68455e6f2fa7\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.656274 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5blns\" (UniqueName: \"kubernetes.io/projected/1b369297-3ab8-4077-9af5-68455e6f2fa7-kube-api-access-5blns\") pod \"nova-scheduler-0\" (UID: \"1b369297-3ab8-4077-9af5-68455e6f2fa7\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.656340 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b369297-3ab8-4077-9af5-68455e6f2fa7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b369297-3ab8-4077-9af5-68455e6f2fa7\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.656362 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.656387 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.656404 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shhc6\" (UniqueName: \"kubernetes.io/projected/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-kube-api-access-shhc6\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.656438 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.657364 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.658031 4942 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-config\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.661407 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-dns-svc\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.663244 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b369297-3ab8-4077-9af5-68455e6f2fa7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1b369297-3ab8-4077-9af5-68455e6f2fa7\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.669517 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.669612 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.670518 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1b369297-3ab8-4077-9af5-68455e6f2fa7-config-data\") pod \"nova-scheduler-0\" (UID: \"1b369297-3ab8-4077-9af5-68455e6f2fa7\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.688851 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shhc6\" (UniqueName: \"kubernetes.io/projected/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-kube-api-access-shhc6\") pod \"dnsmasq-dns-757b4f8459-4gdxj\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") " pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.689654 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5blns\" (UniqueName: \"kubernetes.io/projected/1b369297-3ab8-4077-9af5-68455e6f2fa7-kube-api-access-5blns\") pod \"nova-scheduler-0\" (UID: \"1b369297-3ab8-4077-9af5-68455e6f2fa7\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.843049 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:12 crc kubenswrapper[4942]: I0218 19:37:12.860366 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.086669 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-6wkkj"] Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.109163 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bqfl9"] Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.110733 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.113319 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.114282 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.138799 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bqfl9"] Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.188821 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.229032 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:37:13 crc kubenswrapper[4942]: W0218 19:37:13.242794 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1dcc14d9_d4a9_41a7_a380_d28ed9d39ef3.slice/crio-c9b2d102cdaeda4714a41f8fd9d6eea88f81b5d3b64632545c0357f2607bbf2b WatchSource:0}: Error finding container c9b2d102cdaeda4714a41f8fd9d6eea88f81b5d3b64632545c0357f2607bbf2b: Status 404 returned error can't find the container with id c9b2d102cdaeda4714a41f8fd9d6eea88f81b5d3b64632545c0357f2607bbf2b Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.255922 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:37:13 crc kubenswrapper[4942]: W0218 19:37:13.256567 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e00cb35_640c_4e86_8ef4_9c11a4a83768.slice/crio-709413498c2a9aaa5df37a75330ab20cc5f02facee6f8f09c4d5399431b7f4ad WatchSource:0}: Error finding container 
709413498c2a9aaa5df37a75330ab20cc5f02facee6f8f09c4d5399431b7f4ad: Status 404 returned error can't find the container with id 709413498c2a9aaa5df37a75330ab20cc5f02facee6f8f09c4d5399431b7f4ad Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.258138 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3","Type":"ContainerStarted","Data":"c9b2d102cdaeda4714a41f8fd9d6eea88f81b5d3b64632545c0357f2607bbf2b"} Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.259461 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f1366c48-2eab-4f52-b946-41b5cd9682a9","Type":"ContainerStarted","Data":"7b2a3f65d372ea4e30eae9c1cb3d4c4737814a71cc9dd040162565d4ea30b91c"} Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.260934 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6wkkj" event={"ID":"b4a19078-b432-452e-8918-7b0f8c60e632","Type":"ContainerStarted","Data":"117e05763b70ed5511fc539676ab66f3e18bab8c00305576c6a5cf642aa0c86e"} Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.273044 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwvcd\" (UniqueName: \"kubernetes.io/projected/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-kube-api-access-cwvcd\") pod \"nova-cell1-conductor-db-sync-bqfl9\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.273147 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bqfl9\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:13 crc 
kubenswrapper[4942]: I0218 19:37:13.273250 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-scripts\") pod \"nova-cell1-conductor-db-sync-bqfl9\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.273404 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-config-data\") pod \"nova-cell1-conductor-db-sync-bqfl9\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.375097 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-config-data\") pod \"nova-cell1-conductor-db-sync-bqfl9\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.375254 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwvcd\" (UniqueName: \"kubernetes.io/projected/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-kube-api-access-cwvcd\") pod \"nova-cell1-conductor-db-sync-bqfl9\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.375294 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bqfl9\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " 
pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.375373 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-scripts\") pod \"nova-cell1-conductor-db-sync-bqfl9\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.383524 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-scripts\") pod \"nova-cell1-conductor-db-sync-bqfl9\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.383776 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-config-data\") pod \"nova-cell1-conductor-db-sync-bqfl9\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.383911 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bqfl9\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.403967 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwvcd\" (UniqueName: \"kubernetes.io/projected/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-kube-api-access-cwvcd\") pod \"nova-cell1-conductor-db-sync-bqfl9\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " pod="openstack/nova-cell1-conductor-db-sync-bqfl9" 
Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.438987 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.537064 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-4gdxj"] Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.549168 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:37:13 crc kubenswrapper[4942]: I0218 19:37:13.969622 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bqfl9"] Feb 18 19:37:14 crc kubenswrapper[4942]: I0218 19:37:14.286952 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4e00cb35-640c-4e86-8ef4-9c11a4a83768","Type":"ContainerStarted","Data":"709413498c2a9aaa5df37a75330ab20cc5f02facee6f8f09c4d5399431b7f4ad"} Feb 18 19:37:14 crc kubenswrapper[4942]: I0218 19:37:14.299444 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6wkkj" event={"ID":"b4a19078-b432-452e-8918-7b0f8c60e632","Type":"ContainerStarted","Data":"3d586c465df9e16d18d5348d207063c859dc4c0c45589222afa474013bd766c5"} Feb 18 19:37:14 crc kubenswrapper[4942]: I0218 19:37:14.303436 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bqfl9" event={"ID":"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2","Type":"ContainerStarted","Data":"2d29442d9649dbaa907e5735ea0dda7657607ca6fa24c4f83c7c2be4ce910a11"} Feb 18 19:37:14 crc kubenswrapper[4942]: I0218 19:37:14.303494 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bqfl9" event={"ID":"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2","Type":"ContainerStarted","Data":"bfe2fb8153deab5eb20af7fd17ac70900646805335b4c77b479b2ae83af42455"} Feb 18 19:37:14 crc kubenswrapper[4942]: I0218 
19:37:14.323154 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b369297-3ab8-4077-9af5-68455e6f2fa7","Type":"ContainerStarted","Data":"76e19217db79153ca1c48808a6a49fd7fae4dd51157fee8775824302253c37bb"} Feb 18 19:37:14 crc kubenswrapper[4942]: I0218 19:37:14.329504 4942 generic.go:334] "Generic (PLEG): container finished" podID="df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" containerID="5f605ec20eeba22cd1e0c8f762ce0215e3f892afe0ae0fcbbbb922ee4f5af646" exitCode=0 Feb 18 19:37:14 crc kubenswrapper[4942]: I0218 19:37:14.329549 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" event={"ID":"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9","Type":"ContainerDied","Data":"5f605ec20eeba22cd1e0c8f762ce0215e3f892afe0ae0fcbbbb922ee4f5af646"} Feb 18 19:37:14 crc kubenswrapper[4942]: I0218 19:37:14.329574 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" event={"ID":"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9","Type":"ContainerStarted","Data":"ac7c2c212726ec658ded163971fdbf65aa1ee8ef5f331c952d6143e9bfa521d8"} Feb 18 19:37:14 crc kubenswrapper[4942]: I0218 19:37:14.341956 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-6wkkj" podStartSLOduration=3.341936384 podStartE2EDuration="3.341936384s" podCreationTimestamp="2026-02-18 19:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:14.338384481 +0000 UTC m=+1194.043317156" watchObservedRunningTime="2026-02-18 19:37:14.341936384 +0000 UTC m=+1194.046869049" Feb 18 19:37:14 crc kubenswrapper[4942]: I0218 19:37:14.371633 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-bqfl9" podStartSLOduration=1.371611702 podStartE2EDuration="1.371611702s" podCreationTimestamp="2026-02-18 
19:37:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:14.358258402 +0000 UTC m=+1194.063191067" watchObservedRunningTime="2026-02-18 19:37:14.371611702 +0000 UTC m=+1194.076544367" Feb 18 19:37:15 crc kubenswrapper[4942]: I0218 19:37:15.343219 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" event={"ID":"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9","Type":"ContainerStarted","Data":"b68121de2fea4f07edecadb5789b88b34bf8d27823e96cbebb2e52ee0368565c"} Feb 18 19:37:15 crc kubenswrapper[4942]: I0218 19:37:15.370362 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" podStartSLOduration=3.3703448959999998 podStartE2EDuration="3.370344896s" podCreationTimestamp="2026-02-18 19:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:15.366650679 +0000 UTC m=+1195.071583344" watchObservedRunningTime="2026-02-18 19:37:15.370344896 +0000 UTC m=+1195.075277561" Feb 18 19:37:16 crc kubenswrapper[4942]: I0218 19:37:16.112904 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:37:16 crc kubenswrapper[4942]: I0218 19:37:16.126644 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:37:16 crc kubenswrapper[4942]: I0218 19:37:16.352187 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.361557 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b369297-3ab8-4077-9af5-68455e6f2fa7","Type":"ContainerStarted","Data":"03b6bb631528443b5f5f07cb8b13ef384d45de36f72e71bf857cfad0d68ac856"} Feb 18 19:37:17 crc 
kubenswrapper[4942]: I0218 19:37:17.363487 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f1366c48-2eab-4f52-b946-41b5cd9682a9","Type":"ContainerStarted","Data":"caf503a14e33f1f6c75a84e13fcab72b56d9b16362dbeda0c58791f6b27e6fcf"} Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.363511 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f1366c48-2eab-4f52-b946-41b5cd9682a9","Type":"ContainerStarted","Data":"0689b0c38955bef713bfebb9dc862d00cd9be367d7fc866a2fa0a00dec3cd055"} Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.365403 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4e00cb35-640c-4e86-8ef4-9c11a4a83768","Type":"ContainerStarted","Data":"4d23d58052be19c944bbfb1bdcae23f79449638dec97cb1fe1f8ae8d61b02fff"} Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.365469 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="4e00cb35-640c-4e86-8ef4-9c11a4a83768" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://4d23d58052be19c944bbfb1bdcae23f79449638dec97cb1fe1f8ae8d61b02fff" gracePeriod=30 Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.367037 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3","Type":"ContainerStarted","Data":"5c56a687bcaef7e5e54c6de1b78374726c82904080884876b458c8525f4a0752"} Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.367068 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3","Type":"ContainerStarted","Data":"a988a34c898a05087381b3c398ec9025e84f7ccd37d7a000f5a4025b770b9c31"} Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.367154 4942 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/nova-metadata-0" podUID="1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" containerName="nova-metadata-log" containerID="cri-o://a988a34c898a05087381b3c398ec9025e84f7ccd37d7a000f5a4025b770b9c31" gracePeriod=30 Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.367209 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" containerName="nova-metadata-metadata" containerID="cri-o://5c56a687bcaef7e5e54c6de1b78374726c82904080884876b458c8525f4a0752" gracePeriod=30 Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.384913 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.16112209 podStartE2EDuration="5.384899332s" podCreationTimestamp="2026-02-18 19:37:12 +0000 UTC" firstStartedPulling="2026-02-18 19:37:13.5715027 +0000 UTC m=+1193.276435365" lastFinishedPulling="2026-02-18 19:37:16.795279942 +0000 UTC m=+1196.500212607" observedRunningTime="2026-02-18 19:37:17.381347759 +0000 UTC m=+1197.086280424" watchObservedRunningTime="2026-02-18 19:37:17.384899332 +0000 UTC m=+1197.089831997" Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.411103 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.857560045 podStartE2EDuration="5.411083749s" podCreationTimestamp="2026-02-18 19:37:12 +0000 UTC" firstStartedPulling="2026-02-18 19:37:13.245483306 +0000 UTC m=+1192.950415971" lastFinishedPulling="2026-02-18 19:37:16.79900702 +0000 UTC m=+1196.503939675" observedRunningTime="2026-02-18 19:37:17.407811804 +0000 UTC m=+1197.112744469" watchObservedRunningTime="2026-02-18 19:37:17.411083749 +0000 UTC m=+1197.116016414" Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.435939 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.81890136 
podStartE2EDuration="5.435919981s" podCreationTimestamp="2026-02-18 19:37:12 +0000 UTC" firstStartedPulling="2026-02-18 19:37:13.179117994 +0000 UTC m=+1192.884050659" lastFinishedPulling="2026-02-18 19:37:16.796136615 +0000 UTC m=+1196.501069280" observedRunningTime="2026-02-18 19:37:17.426931065 +0000 UTC m=+1197.131863740" watchObservedRunningTime="2026-02-18 19:37:17.435919981 +0000 UTC m=+1197.140852666" Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.454552 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.919821458 podStartE2EDuration="5.454530439s" podCreationTimestamp="2026-02-18 19:37:12 +0000 UTC" firstStartedPulling="2026-02-18 19:37:13.258647711 +0000 UTC m=+1192.963580376" lastFinishedPulling="2026-02-18 19:37:16.793356662 +0000 UTC m=+1196.498289357" observedRunningTime="2026-02-18 19:37:17.443362936 +0000 UTC m=+1197.148295641" watchObservedRunningTime="2026-02-18 19:37:17.454530439 +0000 UTC m=+1197.159463104" Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.611035 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.611093 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.620194 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:17 crc kubenswrapper[4942]: I0218 19:37:17.861188 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 19:37:18 crc kubenswrapper[4942]: I0218 19:37:18.377852 4942 generic.go:334] "Generic (PLEG): container finished" podID="1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" containerID="a988a34c898a05087381b3c398ec9025e84f7ccd37d7a000f5a4025b770b9c31" exitCode=143 Feb 18 19:37:18 crc kubenswrapper[4942]: 
I0218 19:37:18.378640 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3","Type":"ContainerDied","Data":"a988a34c898a05087381b3c398ec9025e84f7ccd37d7a000f5a4025b770b9c31"} Feb 18 19:37:21 crc kubenswrapper[4942]: I0218 19:37:21.411301 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bqfl9" event={"ID":"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2","Type":"ContainerDied","Data":"2d29442d9649dbaa907e5735ea0dda7657607ca6fa24c4f83c7c2be4ce910a11"} Feb 18 19:37:21 crc kubenswrapper[4942]: I0218 19:37:21.412361 4942 generic.go:334] "Generic (PLEG): container finished" podID="2e2cb901-5468-4fa9-9b3a-a16f238ff6e2" containerID="2d29442d9649dbaa907e5735ea0dda7657607ca6fa24c4f83c7c2be4ce910a11" exitCode=0 Feb 18 19:37:21 crc kubenswrapper[4942]: I0218 19:37:21.414809 4942 generic.go:334] "Generic (PLEG): container finished" podID="b4a19078-b432-452e-8918-7b0f8c60e632" containerID="3d586c465df9e16d18d5348d207063c859dc4c0c45589222afa474013bd766c5" exitCode=0 Feb 18 19:37:21 crc kubenswrapper[4942]: I0218 19:37:21.414844 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6wkkj" event={"ID":"b4a19078-b432-452e-8918-7b0f8c60e632","Type":"ContainerDied","Data":"3d586c465df9e16d18d5348d207063c859dc4c0c45589222afa474013bd766c5"} Feb 18 19:37:22 crc kubenswrapper[4942]: I0218 19:37:22.553939 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:37:22 crc kubenswrapper[4942]: I0218 19:37:22.554304 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:37:22 crc kubenswrapper[4942]: I0218 19:37:22.844993 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:37:22 crc kubenswrapper[4942]: I0218 19:37:22.862299 4942 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 19:37:22 crc kubenswrapper[4942]: I0218 19:37:22.924327 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 19:37:22 crc kubenswrapper[4942]: I0218 19:37:22.943772 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lrqxl"] Feb 18 19:37:22 crc kubenswrapper[4942]: I0218 19:37:22.944034 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" podUID="d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" containerName="dnsmasq-dns" containerID="cri-o://2e107e6e0a09eb362ca701ccec933f2884a01ef22670bcf63ff6185d0e31a00b" gracePeriod=10 Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.042392 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.069422 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.127151 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-scripts\") pod \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.127249 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6d97\" (UniqueName: \"kubernetes.io/projected/b4a19078-b432-452e-8918-7b0f8c60e632-kube-api-access-b6d97\") pod \"b4a19078-b432-452e-8918-7b0f8c60e632\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.127293 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-config-data\") pod \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.127341 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-combined-ca-bundle\") pod \"b4a19078-b432-452e-8918-7b0f8c60e632\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.127383 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwvcd\" (UniqueName: \"kubernetes.io/projected/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-kube-api-access-cwvcd\") pod \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.127452 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-combined-ca-bundle\") pod \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\" (UID: \"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.127501 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-config-data\") pod \"b4a19078-b432-452e-8918-7b0f8c60e632\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.127528 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-scripts\") pod \"b4a19078-b432-452e-8918-7b0f8c60e632\" (UID: \"b4a19078-b432-452e-8918-7b0f8c60e632\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.133668 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4a19078-b432-452e-8918-7b0f8c60e632-kube-api-access-b6d97" (OuterVolumeSpecName: "kube-api-access-b6d97") pod "b4a19078-b432-452e-8918-7b0f8c60e632" (UID: "b4a19078-b432-452e-8918-7b0f8c60e632"). InnerVolumeSpecName "kube-api-access-b6d97". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.135739 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-scripts" (OuterVolumeSpecName: "scripts") pod "2e2cb901-5468-4fa9-9b3a-a16f238ff6e2" (UID: "2e2cb901-5468-4fa9-9b3a-a16f238ff6e2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.137986 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-scripts" (OuterVolumeSpecName: "scripts") pod "b4a19078-b432-452e-8918-7b0f8c60e632" (UID: "b4a19078-b432-452e-8918-7b0f8c60e632"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.143427 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-kube-api-access-cwvcd" (OuterVolumeSpecName: "kube-api-access-cwvcd") pod "2e2cb901-5468-4fa9-9b3a-a16f238ff6e2" (UID: "2e2cb901-5468-4fa9-9b3a-a16f238ff6e2"). InnerVolumeSpecName "kube-api-access-cwvcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.159698 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-config-data" (OuterVolumeSpecName: "config-data") pod "2e2cb901-5468-4fa9-9b3a-a16f238ff6e2" (UID: "2e2cb901-5468-4fa9-9b3a-a16f238ff6e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.163831 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-config-data" (OuterVolumeSpecName: "config-data") pod "b4a19078-b432-452e-8918-7b0f8c60e632" (UID: "b4a19078-b432-452e-8918-7b0f8c60e632"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.179716 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e2cb901-5468-4fa9-9b3a-a16f238ff6e2" (UID: "2e2cb901-5468-4fa9-9b3a-a16f238ff6e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.200882 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4a19078-b432-452e-8918-7b0f8c60e632" (UID: "b4a19078-b432-452e-8918-7b0f8c60e632"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.242969 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.243010 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6d97\" (UniqueName: \"kubernetes.io/projected/b4a19078-b432-452e-8918-7b0f8c60e632-kube-api-access-b6d97\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.243026 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.243038 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.243052 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwvcd\" (UniqueName: \"kubernetes.io/projected/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-kube-api-access-cwvcd\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.243064 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.243075 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.243086 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4a19078-b432-452e-8918-7b0f8c60e632-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.381750 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.438208 4942 generic.go:334] "Generic (PLEG): container finished" podID="d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" containerID="2e107e6e0a09eb362ca701ccec933f2884a01ef22670bcf63ff6185d0e31a00b" exitCode=0 Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.438290 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" event={"ID":"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0","Type":"ContainerDied","Data":"2e107e6e0a09eb362ca701ccec933f2884a01ef22670bcf63ff6185d0e31a00b"} Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.438325 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" event={"ID":"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0","Type":"ContainerDied","Data":"e8c28553c41794bb048bb9a7187c1a1ab7f1585b41b9526ea5b0ab594f5efa4f"} Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.438347 4942 scope.go:117] "RemoveContainer" containerID="2e107e6e0a09eb362ca701ccec933f2884a01ef22670bcf63ff6185d0e31a00b" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.438505 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lrqxl" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.445189 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bqfl9" event={"ID":"2e2cb901-5468-4fa9-9b3a-a16f238ff6e2","Type":"ContainerDied","Data":"bfe2fb8153deab5eb20af7fd17ac70900646805335b4c77b479b2ae83af42455"} Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.445239 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfe2fb8153deab5eb20af7fd17ac70900646805335b4c77b479b2ae83af42455" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.445246 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bqfl9" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.448411 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-config\") pod \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.448494 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-dns-swift-storage-0\") pod \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.448554 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tb7v\" (UniqueName: \"kubernetes.io/projected/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-kube-api-access-8tb7v\") pod \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.448651 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-ovsdbserver-nb\") pod \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.448696 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-dns-svc\") pod \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.448757 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-ovsdbserver-sb\") pod \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.453665 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6wkkj" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.453742 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6wkkj" event={"ID":"b4a19078-b432-452e-8918-7b0f8c60e632","Type":"ContainerDied","Data":"117e05763b70ed5511fc539676ab66f3e18bab8c00305576c6a5cf642aa0c86e"} Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.453830 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="117e05763b70ed5511fc539676ab66f3e18bab8c00305576c6a5cf642aa0c86e" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.458015 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-kube-api-access-8tb7v" (OuterVolumeSpecName: "kube-api-access-8tb7v") pod "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" (UID: "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0"). InnerVolumeSpecName "kube-api-access-8tb7v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.471459 4942 scope.go:117] "RemoveContainer" containerID="dd0e1dffa19992cdfee9a8283a58b64cddc29aa874d5b918d39a6e3462563edd" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.524365 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.544500 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" (UID: "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.569447 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-config" (OuterVolumeSpecName: "config") pod "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" (UID: "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.570653 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-config\") pod \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\" (UID: \"d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0\") " Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.570906 4942 scope.go:117] "RemoveContainer" containerID="2e107e6e0a09eb362ca701ccec933f2884a01ef22670bcf63ff6185d0e31a00b" Feb 18 19:37:23 crc kubenswrapper[4942]: W0218 19:37:23.571172 4942 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0/volumes/kubernetes.io~configmap/config Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.571193 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" (UID: "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.571199 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-config" (OuterVolumeSpecName: "config") pod "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" (UID: "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: E0218 19:37:23.571482 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e107e6e0a09eb362ca701ccec933f2884a01ef22670bcf63ff6185d0e31a00b\": container with ID starting with 2e107e6e0a09eb362ca701ccec933f2884a01ef22670bcf63ff6185d0e31a00b not found: ID does not exist" containerID="2e107e6e0a09eb362ca701ccec933f2884a01ef22670bcf63ff6185d0e31a00b" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.571519 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e107e6e0a09eb362ca701ccec933f2884a01ef22670bcf63ff6185d0e31a00b"} err="failed to get container status \"2e107e6e0a09eb362ca701ccec933f2884a01ef22670bcf63ff6185d0e31a00b\": rpc error: code = NotFound desc = could not find container \"2e107e6e0a09eb362ca701ccec933f2884a01ef22670bcf63ff6185d0e31a00b\": container with ID starting with 2e107e6e0a09eb362ca701ccec933f2884a01ef22670bcf63ff6185d0e31a00b not found: ID does not exist" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.571544 4942 scope.go:117] "RemoveContainer" containerID="dd0e1dffa19992cdfee9a8283a58b64cddc29aa874d5b918d39a6e3462563edd" Feb 18 19:37:23 crc kubenswrapper[4942]: E0218 19:37:23.571948 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd0e1dffa19992cdfee9a8283a58b64cddc29aa874d5b918d39a6e3462563edd\": container with ID starting with dd0e1dffa19992cdfee9a8283a58b64cddc29aa874d5b918d39a6e3462563edd not found: ID does not exist" containerID="dd0e1dffa19992cdfee9a8283a58b64cddc29aa874d5b918d39a6e3462563edd" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.571979 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd0e1dffa19992cdfee9a8283a58b64cddc29aa874d5b918d39a6e3462563edd"} 
err="failed to get container status \"dd0e1dffa19992cdfee9a8283a58b64cddc29aa874d5b918d39a6e3462563edd\": rpc error: code = NotFound desc = could not find container \"dd0e1dffa19992cdfee9a8283a58b64cddc29aa874d5b918d39a6e3462563edd\": container with ID starting with dd0e1dffa19992cdfee9a8283a58b64cddc29aa874d5b918d39a6e3462563edd not found: ID does not exist" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.572698 4942 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.572789 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tb7v\" (UniqueName: \"kubernetes.io/projected/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-kube-api-access-8tb7v\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.572851 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.572917 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.596562 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" (UID: "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.604189 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 19:37:23 crc kubenswrapper[4942]: E0218 19:37:23.606870 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e2cb901-5468-4fa9-9b3a-a16f238ff6e2" containerName="nova-cell1-conductor-db-sync" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.606894 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e2cb901-5468-4fa9-9b3a-a16f238ff6e2" containerName="nova-cell1-conductor-db-sync" Feb 18 19:37:23 crc kubenswrapper[4942]: E0218 19:37:23.606920 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" containerName="dnsmasq-dns" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.606928 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" containerName="dnsmasq-dns" Feb 18 19:37:23 crc kubenswrapper[4942]: E0218 19:37:23.606955 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a19078-b432-452e-8918-7b0f8c60e632" containerName="nova-manage" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.606965 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a19078-b432-452e-8918-7b0f8c60e632" containerName="nova-manage" Feb 18 19:37:23 crc kubenswrapper[4942]: E0218 19:37:23.606985 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" containerName="init" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.606991 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" containerName="init" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.607323 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4a19078-b432-452e-8918-7b0f8c60e632" containerName="nova-manage" Feb 18 
19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.607366 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e2cb901-5468-4fa9-9b3a-a16f238ff6e2" containerName="nova-cell1-conductor-db-sync" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.607383 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" containerName="dnsmasq-dns" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.609248 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.611213 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.621201 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.631013 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" (UID: "d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.653029 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f1366c48-2eab-4f52-b946-41b5cd9682a9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.205:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.653047 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f1366c48-2eab-4f52-b946-41b5cd9682a9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.205:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.675413 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a93f06c-139b-4052-9519-bbd4476a9dab-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5a93f06c-139b-4052-9519-bbd4476a9dab\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.675832 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtd92\" (UniqueName: \"kubernetes.io/projected/5a93f06c-139b-4052-9519-bbd4476a9dab-kube-api-access-mtd92\") pod \"nova-cell1-conductor-0\" (UID: \"5a93f06c-139b-4052-9519-bbd4476a9dab\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.676947 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a93f06c-139b-4052-9519-bbd4476a9dab-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5a93f06c-139b-4052-9519-bbd4476a9dab\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:37:23 crc 
kubenswrapper[4942]: I0218 19:37:23.677228 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.677247 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.704539 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.704828 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f1366c48-2eab-4f52-b946-41b5cd9682a9" containerName="nova-api-log" containerID="cri-o://0689b0c38955bef713bfebb9dc862d00cd9be367d7fc866a2fa0a00dec3cd055" gracePeriod=30 Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.704895 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f1366c48-2eab-4f52-b946-41b5cd9682a9" containerName="nova-api-api" containerID="cri-o://caf503a14e33f1f6c75a84e13fcab72b56d9b16362dbeda0c58791f6b27e6fcf" gracePeriod=30 Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.740699 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.740781 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.771114 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lrqxl"] Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.778946 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a93f06c-139b-4052-9519-bbd4476a9dab-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5a93f06c-139b-4052-9519-bbd4476a9dab\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.779042 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtd92\" (UniqueName: \"kubernetes.io/projected/5a93f06c-139b-4052-9519-bbd4476a9dab-kube-api-access-mtd92\") pod \"nova-cell1-conductor-0\" (UID: \"5a93f06c-139b-4052-9519-bbd4476a9dab\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.779403 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a93f06c-139b-4052-9519-bbd4476a9dab-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5a93f06c-139b-4052-9519-bbd4476a9dab\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.783394 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a93f06c-139b-4052-9519-bbd4476a9dab-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5a93f06c-139b-4052-9519-bbd4476a9dab\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.788396 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5a93f06c-139b-4052-9519-bbd4476a9dab-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5a93f06c-139b-4052-9519-bbd4476a9dab\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.793684 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtd92\" (UniqueName: \"kubernetes.io/projected/5a93f06c-139b-4052-9519-bbd4476a9dab-kube-api-access-mtd92\") pod \"nova-cell1-conductor-0\" (UID: \"5a93f06c-139b-4052-9519-bbd4476a9dab\") " pod="openstack/nova-cell1-conductor-0" Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.794244 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lrqxl"] Feb 18 19:37:23 crc kubenswrapper[4942]: I0218 19:37:23.935173 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 18 19:37:24 crc kubenswrapper[4942]: I0218 19:37:24.014915 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:37:24 crc kubenswrapper[4942]: I0218 19:37:24.460662 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 19:37:24 crc kubenswrapper[4942]: I0218 19:37:24.477594 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5a93f06c-139b-4052-9519-bbd4476a9dab","Type":"ContainerStarted","Data":"c9cf565b46959ea33ee5857ca6e67aaf00420a397fed906256718948f3150fdc"} Feb 18 19:37:24 crc kubenswrapper[4942]: I0218 19:37:24.481039 4942 generic.go:334] "Generic (PLEG): container finished" podID="f1366c48-2eab-4f52-b946-41b5cd9682a9" containerID="0689b0c38955bef713bfebb9dc862d00cd9be367d7fc866a2fa0a00dec3cd055" exitCode=143 Feb 18 19:37:24 crc kubenswrapper[4942]: I0218 19:37:24.482350 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"f1366c48-2eab-4f52-b946-41b5cd9682a9","Type":"ContainerDied","Data":"0689b0c38955bef713bfebb9dc862d00cd9be367d7fc866a2fa0a00dec3cd055"} Feb 18 19:37:25 crc kubenswrapper[4942]: I0218 19:37:25.050109 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0" path="/var/lib/kubelet/pods/d97cbf5c-7da6-4d2e-a3d6-6bbe07441ae0/volumes" Feb 18 19:37:25 crc kubenswrapper[4942]: I0218 19:37:25.492074 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5a93f06c-139b-4052-9519-bbd4476a9dab","Type":"ContainerStarted","Data":"592a1177d47bd23fe639828c67034da3b063910dc835e128f26e637c35dda821"} Feb 18 19:37:25 crc kubenswrapper[4942]: I0218 19:37:25.492457 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 18 19:37:25 crc kubenswrapper[4942]: I0218 19:37:25.492328 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1b369297-3ab8-4077-9af5-68455e6f2fa7" containerName="nova-scheduler-scheduler" containerID="cri-o://03b6bb631528443b5f5f07cb8b13ef384d45de36f72e71bf857cfad0d68ac856" gracePeriod=30 Feb 18 19:37:25 crc kubenswrapper[4942]: I0218 19:37:25.513979 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.5139573459999998 podStartE2EDuration="2.513957346s" podCreationTimestamp="2026-02-18 19:37:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:25.507402244 +0000 UTC m=+1205.212334909" watchObservedRunningTime="2026-02-18 19:37:25.513957346 +0000 UTC m=+1205.218890011" Feb 18 19:37:27 crc kubenswrapper[4942]: E0218 19:37:27.865618 4942 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot 
register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="03b6bb631528443b5f5f07cb8b13ef384d45de36f72e71bf857cfad0d68ac856" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 19:37:27 crc kubenswrapper[4942]: E0218 19:37:27.869356 4942 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="03b6bb631528443b5f5f07cb8b13ef384d45de36f72e71bf857cfad0d68ac856" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 19:37:27 crc kubenswrapper[4942]: E0218 19:37:27.872071 4942 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="03b6bb631528443b5f5f07cb8b13ef384d45de36f72e71bf857cfad0d68ac856" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 18 19:37:27 crc kubenswrapper[4942]: E0218 19:37:27.872136 4942 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="1b369297-3ab8-4077-9af5-68455e6f2fa7" containerName="nova-scheduler-scheduler" Feb 18 19:37:28 crc kubenswrapper[4942]: I0218 19:37:28.551166 4942 generic.go:334] "Generic (PLEG): container finished" podID="1b369297-3ab8-4077-9af5-68455e6f2fa7" containerID="03b6bb631528443b5f5f07cb8b13ef384d45de36f72e71bf857cfad0d68ac856" exitCode=0 Feb 18 19:37:28 crc kubenswrapper[4942]: I0218 19:37:28.551314 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b369297-3ab8-4077-9af5-68455e6f2fa7","Type":"ContainerDied","Data":"03b6bb631528443b5f5f07cb8b13ef384d45de36f72e71bf857cfad0d68ac856"} Feb 18 19:37:28 crc kubenswrapper[4942]: I0218 19:37:28.741305 4942 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.000718 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.097096 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b369297-3ab8-4077-9af5-68455e6f2fa7-combined-ca-bundle\") pod \"1b369297-3ab8-4077-9af5-68455e6f2fa7\" (UID: \"1b369297-3ab8-4077-9af5-68455e6f2fa7\") " Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.097328 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5blns\" (UniqueName: \"kubernetes.io/projected/1b369297-3ab8-4077-9af5-68455e6f2fa7-kube-api-access-5blns\") pod \"1b369297-3ab8-4077-9af5-68455e6f2fa7\" (UID: \"1b369297-3ab8-4077-9af5-68455e6f2fa7\") " Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.097410 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b369297-3ab8-4077-9af5-68455e6f2fa7-config-data\") pod \"1b369297-3ab8-4077-9af5-68455e6f2fa7\" (UID: \"1b369297-3ab8-4077-9af5-68455e6f2fa7\") " Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.104874 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b369297-3ab8-4077-9af5-68455e6f2fa7-kube-api-access-5blns" (OuterVolumeSpecName: "kube-api-access-5blns") pod "1b369297-3ab8-4077-9af5-68455e6f2fa7" (UID: "1b369297-3ab8-4077-9af5-68455e6f2fa7"). InnerVolumeSpecName "kube-api-access-5blns". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.126051 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b369297-3ab8-4077-9af5-68455e6f2fa7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b369297-3ab8-4077-9af5-68455e6f2fa7" (UID: "1b369297-3ab8-4077-9af5-68455e6f2fa7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.131138 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b369297-3ab8-4077-9af5-68455e6f2fa7-config-data" (OuterVolumeSpecName: "config-data") pod "1b369297-3ab8-4077-9af5-68455e6f2fa7" (UID: "1b369297-3ab8-4077-9af5-68455e6f2fa7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.199951 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5blns\" (UniqueName: \"kubernetes.io/projected/1b369297-3ab8-4077-9af5-68455e6f2fa7-kube-api-access-5blns\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.200000 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b369297-3ab8-4077-9af5-68455e6f2fa7-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.200014 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b369297-3ab8-4077-9af5-68455e6f2fa7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.565372 4942 generic.go:334] "Generic (PLEG): container finished" podID="f1366c48-2eab-4f52-b946-41b5cd9682a9" containerID="caf503a14e33f1f6c75a84e13fcab72b56d9b16362dbeda0c58791f6b27e6fcf" 
exitCode=0 Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.565571 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f1366c48-2eab-4f52-b946-41b5cd9682a9","Type":"ContainerDied","Data":"caf503a14e33f1f6c75a84e13fcab72b56d9b16362dbeda0c58791f6b27e6fcf"} Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.567144 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1b369297-3ab8-4077-9af5-68455e6f2fa7","Type":"ContainerDied","Data":"76e19217db79153ca1c48808a6a49fd7fae4dd51157fee8775824302253c37bb"} Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.567199 4942 scope.go:117] "RemoveContainer" containerID="03b6bb631528443b5f5f07cb8b13ef384d45de36f72e71bf857cfad0d68ac856" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.567238 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.675644 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.702564 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.709161 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1366c48-2eab-4f52-b946-41b5cd9682a9-combined-ca-bundle\") pod \"f1366c48-2eab-4f52-b946-41b5cd9682a9\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.709241 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1366c48-2eab-4f52-b946-41b5cd9682a9-config-data\") pod \"f1366c48-2eab-4f52-b946-41b5cd9682a9\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.709283 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1366c48-2eab-4f52-b946-41b5cd9682a9-logs\") pod \"f1366c48-2eab-4f52-b946-41b5cd9682a9\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.709319 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knn95\" (UniqueName: \"kubernetes.io/projected/f1366c48-2eab-4f52-b946-41b5cd9682a9-kube-api-access-knn95\") pod \"f1366c48-2eab-4f52-b946-41b5cd9682a9\" (UID: \"f1366c48-2eab-4f52-b946-41b5cd9682a9\") " Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.713739 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1366c48-2eab-4f52-b946-41b5cd9682a9-logs" (OuterVolumeSpecName: "logs") pod "f1366c48-2eab-4f52-b946-41b5cd9682a9" (UID: "f1366c48-2eab-4f52-b946-41b5cd9682a9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.726743 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.754413 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1366c48-2eab-4f52-b946-41b5cd9682a9-kube-api-access-knn95" (OuterVolumeSpecName: "kube-api-access-knn95") pod "f1366c48-2eab-4f52-b946-41b5cd9682a9" (UID: "f1366c48-2eab-4f52-b946-41b5cd9682a9"). InnerVolumeSpecName "kube-api-access-knn95". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.760941 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1366c48-2eab-4f52-b946-41b5cd9682a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1366c48-2eab-4f52-b946-41b5cd9682a9" (UID: "f1366c48-2eab-4f52-b946-41b5cd9682a9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.788856 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:37:29 crc kubenswrapper[4942]: E0218 19:37:29.789622 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1366c48-2eab-4f52-b946-41b5cd9682a9" containerName="nova-api-log" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.789633 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1366c48-2eab-4f52-b946-41b5cd9682a9" containerName="nova-api-log" Feb 18 19:37:29 crc kubenswrapper[4942]: E0218 19:37:29.789651 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b369297-3ab8-4077-9af5-68455e6f2fa7" containerName="nova-scheduler-scheduler" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.789657 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b369297-3ab8-4077-9af5-68455e6f2fa7" containerName="nova-scheduler-scheduler" Feb 18 19:37:29 crc kubenswrapper[4942]: E0218 19:37:29.789687 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1366c48-2eab-4f52-b946-41b5cd9682a9" containerName="nova-api-api" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.789693 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1366c48-2eab-4f52-b946-41b5cd9682a9" containerName="nova-api-api" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.790000 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1366c48-2eab-4f52-b946-41b5cd9682a9" containerName="nova-api-log" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.790026 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1366c48-2eab-4f52-b946-41b5cd9682a9" containerName="nova-api-api" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.790050 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b369297-3ab8-4077-9af5-68455e6f2fa7" 
containerName="nova-scheduler-scheduler" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.790932 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.792881 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.820913 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.822473 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934ec68d-b7d2-4435-8e54-4984cea15920-config-data\") pod \"nova-scheduler-0\" (UID: \"934ec68d-b7d2-4435-8e54-4984cea15920\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.822543 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb4b2\" (UniqueName: \"kubernetes.io/projected/934ec68d-b7d2-4435-8e54-4984cea15920-kube-api-access-nb4b2\") pod \"nova-scheduler-0\" (UID: \"934ec68d-b7d2-4435-8e54-4984cea15920\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.822587 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934ec68d-b7d2-4435-8e54-4984cea15920-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"934ec68d-b7d2-4435-8e54-4984cea15920\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.822691 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1366c48-2eab-4f52-b946-41b5cd9682a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:29 crc 
kubenswrapper[4942]: I0218 19:37:29.822776 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1366c48-2eab-4f52-b946-41b5cd9682a9-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.822787 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knn95\" (UniqueName: \"kubernetes.io/projected/f1366c48-2eab-4f52-b946-41b5cd9682a9-kube-api-access-knn95\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.825787 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1366c48-2eab-4f52-b946-41b5cd9682a9-config-data" (OuterVolumeSpecName: "config-data") pod "f1366c48-2eab-4f52-b946-41b5cd9682a9" (UID: "f1366c48-2eab-4f52-b946-41b5cd9682a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.924853 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934ec68d-b7d2-4435-8e54-4984cea15920-config-data\") pod \"nova-scheduler-0\" (UID: \"934ec68d-b7d2-4435-8e54-4984cea15920\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.924911 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb4b2\" (UniqueName: \"kubernetes.io/projected/934ec68d-b7d2-4435-8e54-4984cea15920-kube-api-access-nb4b2\") pod \"nova-scheduler-0\" (UID: \"934ec68d-b7d2-4435-8e54-4984cea15920\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.924944 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934ec68d-b7d2-4435-8e54-4984cea15920-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"934ec68d-b7d2-4435-8e54-4984cea15920\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.925034 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1366c48-2eab-4f52-b946-41b5cd9682a9-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.928458 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934ec68d-b7d2-4435-8e54-4984cea15920-config-data\") pod \"nova-scheduler-0\" (UID: \"934ec68d-b7d2-4435-8e54-4984cea15920\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.928737 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934ec68d-b7d2-4435-8e54-4984cea15920-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"934ec68d-b7d2-4435-8e54-4984cea15920\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:29 crc kubenswrapper[4942]: I0218 19:37:29.942349 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb4b2\" (UniqueName: \"kubernetes.io/projected/934ec68d-b7d2-4435-8e54-4984cea15920-kube-api-access-nb4b2\") pod \"nova-scheduler-0\" (UID: \"934ec68d-b7d2-4435-8e54-4984cea15920\") " pod="openstack/nova-scheduler-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.185876 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.585171 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f1366c48-2eab-4f52-b946-41b5cd9682a9","Type":"ContainerDied","Data":"7b2a3f65d372ea4e30eae9c1cb3d4c4737814a71cc9dd040162565d4ea30b91c"} Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.585299 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.585382 4942 scope.go:117] "RemoveContainer" containerID="caf503a14e33f1f6c75a84e13fcab72b56d9b16362dbeda0c58791f6b27e6fcf" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.646959 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.650883 4942 scope.go:117] "RemoveContainer" containerID="0689b0c38955bef713bfebb9dc862d00cd9be367d7fc866a2fa0a00dec3cd055" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.662807 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.674517 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.685864 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.688474 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.692201 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.696645 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.747124 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-config-data\") pod \"nova-api-0\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " pod="openstack/nova-api-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.747222 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " pod="openstack/nova-api-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.747258 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-logs\") pod \"nova-api-0\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " pod="openstack/nova-api-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.747320 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbc72\" (UniqueName: \"kubernetes.io/projected/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-kube-api-access-hbc72\") pod \"nova-api-0\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " pod="openstack/nova-api-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.848783 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-logs\") pod \"nova-api-0\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " pod="openstack/nova-api-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.848874 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbc72\" (UniqueName: \"kubernetes.io/projected/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-kube-api-access-hbc72\") pod \"nova-api-0\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " pod="openstack/nova-api-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.848933 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-config-data\") pod \"nova-api-0\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " pod="openstack/nova-api-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.849007 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " pod="openstack/nova-api-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.849356 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-logs\") pod \"nova-api-0\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " pod="openstack/nova-api-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.853386 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " pod="openstack/nova-api-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.853593 4942 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-config-data\") pod \"nova-api-0\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " pod="openstack/nova-api-0" Feb 18 19:37:30 crc kubenswrapper[4942]: I0218 19:37:30.870020 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbc72\" (UniqueName: \"kubernetes.io/projected/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-kube-api-access-hbc72\") pod \"nova-api-0\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " pod="openstack/nova-api-0" Feb 18 19:37:31 crc kubenswrapper[4942]: I0218 19:37:31.059980 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b369297-3ab8-4077-9af5-68455e6f2fa7" path="/var/lib/kubelet/pods/1b369297-3ab8-4077-9af5-68455e6f2fa7/volumes" Feb 18 19:37:31 crc kubenswrapper[4942]: I0218 19:37:31.061220 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1366c48-2eab-4f52-b946-41b5cd9682a9" path="/var/lib/kubelet/pods/f1366c48-2eab-4f52-b946-41b5cd9682a9/volumes" Feb 18 19:37:31 crc kubenswrapper[4942]: I0218 19:37:31.165146 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:37:31 crc kubenswrapper[4942]: I0218 19:37:31.595864 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"934ec68d-b7d2-4435-8e54-4984cea15920","Type":"ContainerStarted","Data":"f8f16eaf99b27e5378de6b9f610d1eff9cec3f93c1ffd82c5027dc6d962fe712"} Feb 18 19:37:31 crc kubenswrapper[4942]: I0218 19:37:31.595909 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"934ec68d-b7d2-4435-8e54-4984cea15920","Type":"ContainerStarted","Data":"c51803e068a4df40cf491f2ad59ffe56be6273114ad918ad454d9a2712bc7592"} Feb 18 19:37:31 crc kubenswrapper[4942]: I0218 19:37:31.620598 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.620578177 podStartE2EDuration="2.620578177s" podCreationTimestamp="2026-02-18 19:37:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:31.615715779 +0000 UTC m=+1211.320648454" watchObservedRunningTime="2026-02-18 19:37:31.620578177 +0000 UTC m=+1211.325510832" Feb 18 19:37:31 crc kubenswrapper[4942]: I0218 19:37:31.664720 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:37:32 crc kubenswrapper[4942]: I0218 19:37:32.607326 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a","Type":"ContainerStarted","Data":"76b344abb86d057dcf20c894a0759c7d468787e44aa3785ecfbdff449a08568b"} Feb 18 19:37:32 crc kubenswrapper[4942]: I0218 19:37:32.607631 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a","Type":"ContainerStarted","Data":"7b3e16de2841d45806031cfe8067c2ec6814cccea938f4c062433863dc9f77c2"} Feb 18 19:37:32 crc kubenswrapper[4942]: 
I0218 19:37:32.607643 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a","Type":"ContainerStarted","Data":"cf0e7a844a7633e58acd2bee9698d5dc7b514eece929972cfe261a5a10983dd7"} Feb 18 19:37:32 crc kubenswrapper[4942]: I0218 19:37:32.631969 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.631946172 podStartE2EDuration="2.631946172s" podCreationTimestamp="2026-02-18 19:37:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:32.623943112 +0000 UTC m=+1212.328875777" watchObservedRunningTime="2026-02-18 19:37:32.631946172 +0000 UTC m=+1212.336878837" Feb 18 19:37:32 crc kubenswrapper[4942]: I0218 19:37:32.865957 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:37:32 crc kubenswrapper[4942]: I0218 19:37:32.866243 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="a8f1712c-12df-4ca2-81d3-dc649c747868" containerName="kube-state-metrics" containerID="cri-o://91cd24a25481f6b5fa46205492b122ca37c2c2a0ef88de3487c62657546ed3a6" gracePeriod=30 Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.364302 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.425608 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f75rx\" (UniqueName: \"kubernetes.io/projected/a8f1712c-12df-4ca2-81d3-dc649c747868-kube-api-access-f75rx\") pod \"a8f1712c-12df-4ca2-81d3-dc649c747868\" (UID: \"a8f1712c-12df-4ca2-81d3-dc649c747868\") " Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.436729 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8f1712c-12df-4ca2-81d3-dc649c747868-kube-api-access-f75rx" (OuterVolumeSpecName: "kube-api-access-f75rx") pod "a8f1712c-12df-4ca2-81d3-dc649c747868" (UID: "a8f1712c-12df-4ca2-81d3-dc649c747868"). InnerVolumeSpecName "kube-api-access-f75rx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.528556 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f75rx\" (UniqueName: \"kubernetes.io/projected/a8f1712c-12df-4ca2-81d3-dc649c747868-kube-api-access-f75rx\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.616852 4942 generic.go:334] "Generic (PLEG): container finished" podID="a8f1712c-12df-4ca2-81d3-dc649c747868" containerID="91cd24a25481f6b5fa46205492b122ca37c2c2a0ef88de3487c62657546ed3a6" exitCode=2 Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.616924 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.616911 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8f1712c-12df-4ca2-81d3-dc649c747868","Type":"ContainerDied","Data":"91cd24a25481f6b5fa46205492b122ca37c2c2a0ef88de3487c62657546ed3a6"} Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.618009 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8f1712c-12df-4ca2-81d3-dc649c747868","Type":"ContainerDied","Data":"60cb4ff34d0b296ea32561c63d6c9eaa0072a589abe5d55659f37a97a3ea461d"} Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.618056 4942 scope.go:117] "RemoveContainer" containerID="91cd24a25481f6b5fa46205492b122ca37c2c2a0ef88de3487c62657546ed3a6" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.642945 4942 scope.go:117] "RemoveContainer" containerID="91cd24a25481f6b5fa46205492b122ca37c2c2a0ef88de3487c62657546ed3a6" Feb 18 19:37:33 crc kubenswrapper[4942]: E0218 19:37:33.643350 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91cd24a25481f6b5fa46205492b122ca37c2c2a0ef88de3487c62657546ed3a6\": container with ID starting with 91cd24a25481f6b5fa46205492b122ca37c2c2a0ef88de3487c62657546ed3a6 not found: ID does not exist" containerID="91cd24a25481f6b5fa46205492b122ca37c2c2a0ef88de3487c62657546ed3a6" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.643408 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91cd24a25481f6b5fa46205492b122ca37c2c2a0ef88de3487c62657546ed3a6"} err="failed to get container status \"91cd24a25481f6b5fa46205492b122ca37c2c2a0ef88de3487c62657546ed3a6\": rpc error: code = NotFound desc = could not find container \"91cd24a25481f6b5fa46205492b122ca37c2c2a0ef88de3487c62657546ed3a6\": container with ID starting with 
91cd24a25481f6b5fa46205492b122ca37c2c2a0ef88de3487c62657546ed3a6 not found: ID does not exist" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.656902 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.668055 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.685319 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:37:33 crc kubenswrapper[4942]: E0218 19:37:33.685988 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f1712c-12df-4ca2-81d3-dc649c747868" containerName="kube-state-metrics" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.686019 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f1712c-12df-4ca2-81d3-dc649c747868" containerName="kube-state-metrics" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.686297 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8f1712c-12df-4ca2-81d3-dc649c747868" containerName="kube-state-metrics" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.687274 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.690220 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.691326 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.695587 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.833728 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/304de92f-d344-46e6-86b1-5f132f3698b1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"304de92f-d344-46e6-86b1-5f132f3698b1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.834068 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/304de92f-d344-46e6-86b1-5f132f3698b1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"304de92f-d344-46e6-86b1-5f132f3698b1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.834199 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb2gm\" (UniqueName: \"kubernetes.io/projected/304de92f-d344-46e6-86b1-5f132f3698b1-kube-api-access-hb2gm\") pod \"kube-state-metrics-0\" (UID: \"304de92f-d344-46e6-86b1-5f132f3698b1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.834429 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/304de92f-d344-46e6-86b1-5f132f3698b1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"304de92f-d344-46e6-86b1-5f132f3698b1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.936095 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/304de92f-d344-46e6-86b1-5f132f3698b1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"304de92f-d344-46e6-86b1-5f132f3698b1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.936658 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/304de92f-d344-46e6-86b1-5f132f3698b1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"304de92f-d344-46e6-86b1-5f132f3698b1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.936918 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb2gm\" (UniqueName: \"kubernetes.io/projected/304de92f-d344-46e6-86b1-5f132f3698b1-kube-api-access-hb2gm\") pod \"kube-state-metrics-0\" (UID: \"304de92f-d344-46e6-86b1-5f132f3698b1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.937064 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/304de92f-d344-46e6-86b1-5f132f3698b1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"304de92f-d344-46e6-86b1-5f132f3698b1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.941229 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/304de92f-d344-46e6-86b1-5f132f3698b1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"304de92f-d344-46e6-86b1-5f132f3698b1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.942227 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/304de92f-d344-46e6-86b1-5f132f3698b1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"304de92f-d344-46e6-86b1-5f132f3698b1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.942614 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/304de92f-d344-46e6-86b1-5f132f3698b1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"304de92f-d344-46e6-86b1-5f132f3698b1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.957587 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb2gm\" (UniqueName: \"kubernetes.io/projected/304de92f-d344-46e6-86b1-5f132f3698b1-kube-api-access-hb2gm\") pod \"kube-state-metrics-0\" (UID: \"304de92f-d344-46e6-86b1-5f132f3698b1\") " pod="openstack/kube-state-metrics-0" Feb 18 19:37:33 crc kubenswrapper[4942]: I0218 19:37:33.965371 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 18 19:37:34 crc kubenswrapper[4942]: I0218 19:37:34.013595 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 19:37:34 crc kubenswrapper[4942]: I0218 19:37:34.490311 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 19:37:34 crc kubenswrapper[4942]: I0218 19:37:34.638535 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"304de92f-d344-46e6-86b1-5f132f3698b1","Type":"ContainerStarted","Data":"6d7b9afd9429a7f188836af6e87520172ae02b0de8e27e6e63d71774f11f89d7"} Feb 18 19:37:34 crc kubenswrapper[4942]: I0218 19:37:34.693631 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:37:34 crc kubenswrapper[4942]: I0218 19:37:34.694072 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="ceilometer-central-agent" containerID="cri-o://4b1f480ddf927d40a046c53a831832dfc0661e5a1bfbb9d0061a5f0118ebd54e" gracePeriod=30 Feb 18 19:37:34 crc kubenswrapper[4942]: I0218 19:37:34.694287 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="proxy-httpd" containerID="cri-o://ca5ad4aca4a617b9bbb63455e63bf0207750221ea32250b434a89953bffe9fd9" gracePeriod=30 Feb 18 19:37:34 crc kubenswrapper[4942]: I0218 19:37:34.694513 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="ceilometer-notification-agent" containerID="cri-o://a0cad6f7c64293c0ca724387bf6888f39862910aef71002691436d4792c6de55" gracePeriod=30 Feb 18 19:37:34 crc kubenswrapper[4942]: I0218 19:37:34.694572 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="sg-core" 
containerID="cri-o://611f841412a878234cbc129413f605782e9a919aae0510161f0f5229befb06e0" gracePeriod=30 Feb 18 19:37:35 crc kubenswrapper[4942]: I0218 19:37:35.062344 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8f1712c-12df-4ca2-81d3-dc649c747868" path="/var/lib/kubelet/pods/a8f1712c-12df-4ca2-81d3-dc649c747868/volumes" Feb 18 19:37:35 crc kubenswrapper[4942]: I0218 19:37:35.186855 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 19:37:35 crc kubenswrapper[4942]: I0218 19:37:35.652962 4942 generic.go:334] "Generic (PLEG): container finished" podID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerID="ca5ad4aca4a617b9bbb63455e63bf0207750221ea32250b434a89953bffe9fd9" exitCode=0 Feb 18 19:37:35 crc kubenswrapper[4942]: I0218 19:37:35.653002 4942 generic.go:334] "Generic (PLEG): container finished" podID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerID="611f841412a878234cbc129413f605782e9a919aae0510161f0f5229befb06e0" exitCode=2 Feb 18 19:37:35 crc kubenswrapper[4942]: I0218 19:37:35.653012 4942 generic.go:334] "Generic (PLEG): container finished" podID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerID="4b1f480ddf927d40a046c53a831832dfc0661e5a1bfbb9d0061a5f0118ebd54e" exitCode=0 Feb 18 19:37:35 crc kubenswrapper[4942]: I0218 19:37:35.653039 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4981b67f-ebf1-4d2e-a717-67edbc242474","Type":"ContainerDied","Data":"ca5ad4aca4a617b9bbb63455e63bf0207750221ea32250b434a89953bffe9fd9"} Feb 18 19:37:35 crc kubenswrapper[4942]: I0218 19:37:35.653077 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4981b67f-ebf1-4d2e-a717-67edbc242474","Type":"ContainerDied","Data":"611f841412a878234cbc129413f605782e9a919aae0510161f0f5229befb06e0"} Feb 18 19:37:35 crc kubenswrapper[4942]: I0218 19:37:35.653105 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"4981b67f-ebf1-4d2e-a717-67edbc242474","Type":"ContainerDied","Data":"4b1f480ddf927d40a046c53a831832dfc0661e5a1bfbb9d0061a5f0118ebd54e"} Feb 18 19:37:35 crc kubenswrapper[4942]: I0218 19:37:35.654872 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"304de92f-d344-46e6-86b1-5f132f3698b1","Type":"ContainerStarted","Data":"8c239e34c8dd9c41e4e4961eec98a816e5464b403c43822bf1283ca96cd98a62"} Feb 18 19:37:35 crc kubenswrapper[4942]: I0218 19:37:35.655041 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 18 19:37:35 crc kubenswrapper[4942]: I0218 19:37:35.677239 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.224582656 podStartE2EDuration="2.677221712s" podCreationTimestamp="2026-02-18 19:37:33 +0000 UTC" firstStartedPulling="2026-02-18 19:37:34.491600144 +0000 UTC m=+1214.196532809" lastFinishedPulling="2026-02-18 19:37:34.9442392 +0000 UTC m=+1214.649171865" observedRunningTime="2026-02-18 19:37:35.670593398 +0000 UTC m=+1215.375526083" watchObservedRunningTime="2026-02-18 19:37:35.677221712 +0000 UTC m=+1215.382154377" Feb 18 19:37:36 crc kubenswrapper[4942]: I0218 19:37:36.666830 4942 generic.go:334] "Generic (PLEG): container finished" podID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerID="a0cad6f7c64293c0ca724387bf6888f39862910aef71002691436d4792c6de55" exitCode=0 Feb 18 19:37:36 crc kubenswrapper[4942]: I0218 19:37:36.666898 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4981b67f-ebf1-4d2e-a717-67edbc242474","Type":"ContainerDied","Data":"a0cad6f7c64293c0ca724387bf6888f39862910aef71002691436d4792c6de55"} Feb 18 19:37:36 crc kubenswrapper[4942]: I0218 19:37:36.976123 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.100484 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2dh2\" (UniqueName: \"kubernetes.io/projected/4981b67f-ebf1-4d2e-a717-67edbc242474-kube-api-access-v2dh2\") pod \"4981b67f-ebf1-4d2e-a717-67edbc242474\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.100581 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-config-data\") pod \"4981b67f-ebf1-4d2e-a717-67edbc242474\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.100678 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-scripts\") pod \"4981b67f-ebf1-4d2e-a717-67edbc242474\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.100860 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-sg-core-conf-yaml\") pod \"4981b67f-ebf1-4d2e-a717-67edbc242474\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.100929 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4981b67f-ebf1-4d2e-a717-67edbc242474-log-httpd\") pod \"4981b67f-ebf1-4d2e-a717-67edbc242474\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.101030 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/4981b67f-ebf1-4d2e-a717-67edbc242474-run-httpd\") pod \"4981b67f-ebf1-4d2e-a717-67edbc242474\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.101056 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-combined-ca-bundle\") pod \"4981b67f-ebf1-4d2e-a717-67edbc242474\" (UID: \"4981b67f-ebf1-4d2e-a717-67edbc242474\") " Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.101486 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4981b67f-ebf1-4d2e-a717-67edbc242474-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4981b67f-ebf1-4d2e-a717-67edbc242474" (UID: "4981b67f-ebf1-4d2e-a717-67edbc242474"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.101891 4942 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4981b67f-ebf1-4d2e-a717-67edbc242474-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.101876 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4981b67f-ebf1-4d2e-a717-67edbc242474-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4981b67f-ebf1-4d2e-a717-67edbc242474" (UID: "4981b67f-ebf1-4d2e-a717-67edbc242474"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.106887 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4981b67f-ebf1-4d2e-a717-67edbc242474-kube-api-access-v2dh2" (OuterVolumeSpecName: "kube-api-access-v2dh2") pod "4981b67f-ebf1-4d2e-a717-67edbc242474" (UID: "4981b67f-ebf1-4d2e-a717-67edbc242474"). InnerVolumeSpecName "kube-api-access-v2dh2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.115703 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-scripts" (OuterVolumeSpecName: "scripts") pod "4981b67f-ebf1-4d2e-a717-67edbc242474" (UID: "4981b67f-ebf1-4d2e-a717-67edbc242474"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.133734 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4981b67f-ebf1-4d2e-a717-67edbc242474" (UID: "4981b67f-ebf1-4d2e-a717-67edbc242474"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.195704 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4981b67f-ebf1-4d2e-a717-67edbc242474" (UID: "4981b67f-ebf1-4d2e-a717-67edbc242474"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.203374 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.203413 4942 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.203428 4942 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4981b67f-ebf1-4d2e-a717-67edbc242474-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.203446 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.203462 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2dh2\" (UniqueName: \"kubernetes.io/projected/4981b67f-ebf1-4d2e-a717-67edbc242474-kube-api-access-v2dh2\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.212011 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-config-data" (OuterVolumeSpecName: "config-data") pod "4981b67f-ebf1-4d2e-a717-67edbc242474" (UID: "4981b67f-ebf1-4d2e-a717-67edbc242474"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.305301 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4981b67f-ebf1-4d2e-a717-67edbc242474-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.676686 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4981b67f-ebf1-4d2e-a717-67edbc242474","Type":"ContainerDied","Data":"30ee69691a3055e0e7dae81b55c1720ecd6dcb44e21ef193aead637c15341932"}
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.676731 4942 scope.go:117] "RemoveContainer" containerID="ca5ad4aca4a617b9bbb63455e63bf0207750221ea32250b434a89953bffe9fd9"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.676859 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.710322 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.717127 4942 scope.go:117] "RemoveContainer" containerID="611f841412a878234cbc129413f605782e9a919aae0510161f0f5229befb06e0"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.717529 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.734745 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:37:37 crc kubenswrapper[4942]: E0218 19:37:37.735164 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="ceilometer-notification-agent"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.735184 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="ceilometer-notification-agent"
Feb 18 19:37:37 crc kubenswrapper[4942]: E0218 19:37:37.735209 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="ceilometer-central-agent"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.735216 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="ceilometer-central-agent"
Feb 18 19:37:37 crc kubenswrapper[4942]: E0218 19:37:37.735236 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="sg-core"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.735243 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="sg-core"
Feb 18 19:37:37 crc kubenswrapper[4942]: E0218 19:37:37.735254 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="proxy-httpd"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.735259 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="proxy-httpd"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.735419 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="ceilometer-central-agent"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.735433 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="sg-core"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.735449 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="ceilometer-notification-agent"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.735462 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" containerName="proxy-httpd"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.737032 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.741137 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.742568 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.742806 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.747134 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.751192 4942 scope.go:117] "RemoveContainer" containerID="a0cad6f7c64293c0ca724387bf6888f39862910aef71002691436d4792c6de55"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.814240 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-scripts\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.814533 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-config-data\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.814569 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ce1800-8544-49d6-84a8-f635038f26da-run-httpd\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.814608 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.814651 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.814687 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.814707 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2f69\" (UniqueName: \"kubernetes.io/projected/b3ce1800-8544-49d6-84a8-f635038f26da-kube-api-access-d2f69\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.814732 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ce1800-8544-49d6-84a8-f635038f26da-log-httpd\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.815732 4942 scope.go:117] "RemoveContainer" containerID="4b1f480ddf927d40a046c53a831832dfc0661e5a1bfbb9d0061a5f0118ebd54e"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.916334 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.916400 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.916440 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.916464 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2f69\" (UniqueName: \"kubernetes.io/projected/b3ce1800-8544-49d6-84a8-f635038f26da-kube-api-access-d2f69\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.916495 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ce1800-8544-49d6-84a8-f635038f26da-log-httpd\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.916830 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-scripts\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.917249 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-config-data\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.917286 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ce1800-8544-49d6-84a8-f635038f26da-run-httpd\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.917202 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ce1800-8544-49d6-84a8-f635038f26da-log-httpd\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.917543 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ce1800-8544-49d6-84a8-f635038f26da-run-httpd\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.921274 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.921969 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.922019 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-config-data\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.924520 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-scripts\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.925835 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0"
Feb 18 19:37:37 crc kubenswrapper[4942]: I0218 19:37:37.935529 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2f69\" (UniqueName: \"kubernetes.io/projected/b3ce1800-8544-49d6-84a8-f635038f26da-kube-api-access-d2f69\") pod \"ceilometer-0\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " pod="openstack/ceilometer-0"
Feb 18 19:37:38 crc kubenswrapper[4942]: I0218 19:37:38.111928 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 19:37:38 crc kubenswrapper[4942]: I0218 19:37:38.599001 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 18 19:37:38 crc kubenswrapper[4942]: I0218 19:37:38.685567 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ce1800-8544-49d6-84a8-f635038f26da","Type":"ContainerStarted","Data":"4e628a6fe7bba13d144b29335e6e3f96fe39a4ea90cabd697733b132cffcd80d"}
Feb 18 19:37:39 crc kubenswrapper[4942]: I0218 19:37:39.046267 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4981b67f-ebf1-4d2e-a717-67edbc242474" path="/var/lib/kubelet/pods/4981b67f-ebf1-4d2e-a717-67edbc242474/volumes"
Feb 18 19:37:39 crc kubenswrapper[4942]: I0218 19:37:39.709809 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ce1800-8544-49d6-84a8-f635038f26da","Type":"ContainerStarted","Data":"122dbfd5620ffaa553bc2db1d7e57c3c94dde9e3c18c2a3f01a6cf8f6a924404"}
Feb 18 19:37:40 crc kubenswrapper[4942]: I0218 19:37:40.187068 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 18 19:37:40 crc kubenswrapper[4942]: I0218 19:37:40.224367 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 18 19:37:40 crc kubenswrapper[4942]: I0218 19:37:40.722538 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ce1800-8544-49d6-84a8-f635038f26da","Type":"ContainerStarted","Data":"185207e1f6c945d8a225d619dcb1bdc76dddd2e40a23e7344e8ebfbde1ab9c92"}
Feb 18 19:37:40 crc kubenswrapper[4942]: I0218 19:37:40.722886 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ce1800-8544-49d6-84a8-f635038f26da","Type":"ContainerStarted","Data":"9e97f1132a204b3bf2c933f6371c1ae1d572289c719b35111b591635e9241e91"}
Feb 18 19:37:40 crc kubenswrapper[4942]: I0218 19:37:40.764555 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 18 19:37:41 crc kubenswrapper[4942]: I0218 19:37:41.166184 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 18 19:37:41 crc kubenswrapper[4942]: I0218 19:37:41.166530 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 18 19:37:42 crc kubenswrapper[4942]: I0218 19:37:42.166083 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.213:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 19:37:42 crc kubenswrapper[4942]: I0218 19:37:42.207023 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.213:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 18 19:37:42 crc kubenswrapper[4942]: I0218 19:37:42.744835 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ce1800-8544-49d6-84a8-f635038f26da","Type":"ContainerStarted","Data":"9ec5caf96b65f1b74beab3396ebf587794daec1e7c6b002fe84e8ad8a0730e95"}
Feb 18 19:37:42 crc kubenswrapper[4942]: I0218 19:37:42.745074 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 18 19:37:42 crc kubenswrapper[4942]: I0218 19:37:42.775305 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.2465024590000002 podStartE2EDuration="5.775277545s" podCreationTimestamp="2026-02-18 19:37:37 +0000 UTC" firstStartedPulling="2026-02-18 19:37:38.614408115 +0000 UTC m=+1218.319340800" lastFinishedPulling="2026-02-18 19:37:42.143183221 +0000 UTC m=+1221.848115886" observedRunningTime="2026-02-18 19:37:42.760667342 +0000 UTC m=+1222.465600007" watchObservedRunningTime="2026-02-18 19:37:42.775277545 +0000 UTC m=+1222.480210240"
Feb 18 19:37:44 crc kubenswrapper[4942]: I0218 19:37:44.024427 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Feb 18 19:37:47 crc kubenswrapper[4942]: E0218 19:37:47.428297 4942 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4981b67f_ebf1_4d2e_a717_67edbc242474.slice/crio-30ee69691a3055e0e7dae81b55c1720ecd6dcb44e21ef193aead637c15341932: Error finding container 30ee69691a3055e0e7dae81b55c1720ecd6dcb44e21ef193aead637c15341932: Status 404 returned error can't find the container with id 30ee69691a3055e0e7dae81b55c1720ecd6dcb44e21ef193aead637c15341932
Feb 18 19:37:47 crc kubenswrapper[4942]: E0218 19:37:47.707610 4942 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e00cb35_640c_4e86_8ef4_9c11a4a83768.slice/crio-conmon-4d23d58052be19c944bbfb1bdcae23f79449638dec97cb1fe1f8ae8d61b02fff.scope\": RecentStats: unable to find data in memory cache]"
Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.801099 4942 generic.go:334] "Generic (PLEG): container finished" podID="1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" containerID="5c56a687bcaef7e5e54c6de1b78374726c82904080884876b458c8525f4a0752" exitCode=137
Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.801140 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3","Type":"ContainerDied","Data":"5c56a687bcaef7e5e54c6de1b78374726c82904080884876b458c8525f4a0752"}
Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.801203 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3","Type":"ContainerDied","Data":"c9b2d102cdaeda4714a41f8fd9d6eea88f81b5d3b64632545c0357f2607bbf2b"}
Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.801217 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9b2d102cdaeda4714a41f8fd9d6eea88f81b5d3b64632545c0357f2607bbf2b"
Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.802504 4942 generic.go:334] "Generic (PLEG): container finished" podID="4e00cb35-640c-4e86-8ef4-9c11a4a83768" containerID="4d23d58052be19c944bbfb1bdcae23f79449638dec97cb1fe1f8ae8d61b02fff" exitCode=137
Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.802542 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4e00cb35-640c-4e86-8ef4-9c11a4a83768","Type":"ContainerDied","Data":"4d23d58052be19c944bbfb1bdcae23f79449638dec97cb1fe1f8ae8d61b02fff"}
Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.802562 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4e00cb35-640c-4e86-8ef4-9c11a4a83768","Type":"ContainerDied","Data":"709413498c2a9aaa5df37a75330ab20cc5f02facee6f8f09c4d5399431b7f4ad"}
Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.802574 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="709413498c2a9aaa5df37a75330ab20cc5f02facee6f8f09c4d5399431b7f4ad"
Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.840717 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.847104 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.929087 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlgsg\" (UniqueName: \"kubernetes.io/projected/4e00cb35-640c-4e86-8ef4-9c11a4a83768-kube-api-access-mlgsg\") pod \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\" (UID: \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\") "
Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.929137 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-config-data\") pod \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") "
Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.929262 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-logs\") pod \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") "
Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.929357 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq77n\" (UniqueName: \"kubernetes.io/projected/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-kube-api-access-hq77n\") pod \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") "
Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.929379 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-combined-ca-bundle\") pod \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\" (UID: \"1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3\") "
Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.929472 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e00cb35-640c-4e86-8ef4-9c11a4a83768-combined-ca-bundle\") pod \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\" (UID: \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\") "
Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.929740 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e00cb35-640c-4e86-8ef4-9c11a4a83768-config-data\") pod \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\" (UID: \"4e00cb35-640c-4e86-8ef4-9c11a4a83768\") "
Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.929584 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-logs" (OuterVolumeSpecName: "logs") pod "1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" (UID: "1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.930287 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-logs\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.934248 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-kube-api-access-hq77n" (OuterVolumeSpecName: "kube-api-access-hq77n") pod "1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" (UID: "1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3"). InnerVolumeSpecName "kube-api-access-hq77n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.934965 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e00cb35-640c-4e86-8ef4-9c11a4a83768-kube-api-access-mlgsg" (OuterVolumeSpecName: "kube-api-access-mlgsg") pod "4e00cb35-640c-4e86-8ef4-9c11a4a83768" (UID: "4e00cb35-640c-4e86-8ef4-9c11a4a83768"). InnerVolumeSpecName "kube-api-access-mlgsg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.962604 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" (UID: "1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.962937 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e00cb35-640c-4e86-8ef4-9c11a4a83768-config-data" (OuterVolumeSpecName: "config-data") pod "4e00cb35-640c-4e86-8ef4-9c11a4a83768" (UID: "4e00cb35-640c-4e86-8ef4-9c11a4a83768"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.964864 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e00cb35-640c-4e86-8ef4-9c11a4a83768-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e00cb35-640c-4e86-8ef4-9c11a4a83768" (UID: "4e00cb35-640c-4e86-8ef4-9c11a4a83768"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:37:47 crc kubenswrapper[4942]: I0218 19:37:47.984196 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-config-data" (OuterVolumeSpecName: "config-data") pod "1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" (UID: "1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.031799 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq77n\" (UniqueName: \"kubernetes.io/projected/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-kube-api-access-hq77n\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.031832 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.031841 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e00cb35-640c-4e86-8ef4-9c11a4a83768-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.031849 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e00cb35-640c-4e86-8ef4-9c11a4a83768-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.031858 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlgsg\" (UniqueName: \"kubernetes.io/projected/4e00cb35-640c-4e86-8ef4-9c11a4a83768-kube-api-access-mlgsg\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.031867 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.813813 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.813916 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.866865 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.876819 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.886685 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.922437 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.938517 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 18 19:37:48 crc kubenswrapper[4942]: E0218 19:37:48.939166 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" containerName="nova-metadata-metadata"
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.939200 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" containerName="nova-metadata-metadata"
Feb 18 19:37:48 crc kubenswrapper[4942]: E0218 19:37:48.939221 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" containerName="nova-metadata-log"
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.939227 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" containerName="nova-metadata-log"
Feb 18 19:37:48 crc kubenswrapper[4942]: E0218 19:37:48.939270 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e00cb35-640c-4e86-8ef4-9c11a4a83768" containerName="nova-cell1-novncproxy-novncproxy"
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.939277 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e00cb35-640c-4e86-8ef4-9c11a4a83768" containerName="nova-cell1-novncproxy-novncproxy"
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.939446 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e00cb35-640c-4e86-8ef4-9c11a4a83768" containerName="nova-cell1-novncproxy-novncproxy"
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.939470 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" containerName="nova-metadata-log"
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.939480 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" containerName="nova-metadata-metadata"
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.940895 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.954573 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.954720 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.954838 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.955886 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.958003 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.962300 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.963418 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.973154 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 18 19:37:48 crc kubenswrapper[4942]: I0218 19:37:48.988802 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.045414 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3" path="/var/lib/kubelet/pods/1dcc14d9-d4a9-41a7-a380-d28ed9d39ef3/volumes"
Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.045987 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e00cb35-640c-4e86-8ef4-9c11a4a83768" path="/var/lib/kubelet/pods/4e00cb35-640c-4e86-8ef4-9c11a4a83768/volumes"
Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.055745 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89kwj\" (UniqueName: \"kubernetes.io/projected/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-kube-api-access-89kwj\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0"
Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.055815 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.055841 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0"
Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.055882 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0"
Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.055903 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-nova-novncproxy-tls-certs\")
pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.055974 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.056028 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-logs\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.056092 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-config-data\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.056183 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.056240 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjq7x\" (UniqueName: \"kubernetes.io/projected/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-kube-api-access-mjq7x\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.157786 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.157890 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-logs\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.158640 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-logs\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.158865 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-config-data\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.158894 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.159307 4942 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-mjq7x\" (UniqueName: \"kubernetes.io/projected/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-kube-api-access-mjq7x\") pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.159366 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89kwj\" (UniqueName: \"kubernetes.io/projected/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-kube-api-access-89kwj\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.159429 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.159459 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.159479 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.159499 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.165520 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.165803 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.165921 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.167222 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.170247 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-config-data\") pod \"nova-metadata-0\" (UID: 
\"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.172846 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.174373 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.179506 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89kwj\" (UniqueName: \"kubernetes.io/projected/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-kube-api-access-89kwj\") pod \"nova-metadata-0\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") " pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.193864 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjq7x\" (UniqueName: \"kubernetes.io/projected/c401dd00-c0a5-41c1-98ea-873e0e2ce7bf-kube-api-access-mjq7x\") pod \"nova-cell1-novncproxy-0\" (UID: \"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.278079 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.294813 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.801401 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 19:37:49 crc kubenswrapper[4942]: W0218 19:37:49.806436 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc401dd00_c0a5_41c1_98ea_873e0e2ce7bf.slice/crio-7a3176d451d4f22629bbea51316e8da6dc1028f231b91417b79ebe376f8398f7 WatchSource:0}: Error finding container 7a3176d451d4f22629bbea51316e8da6dc1028f231b91417b79ebe376f8398f7: Status 404 returned error can't find the container with id 7a3176d451d4f22629bbea51316e8da6dc1028f231b91417b79ebe376f8398f7 Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.823865 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf","Type":"ContainerStarted","Data":"7a3176d451d4f22629bbea51316e8da6dc1028f231b91417b79ebe376f8398f7"} Feb 18 19:37:49 crc kubenswrapper[4942]: I0218 19:37:49.897196 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:37:49 crc kubenswrapper[4942]: W0218 19:37:49.898044 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f2c79fe_40ed_4218_9db5_ecf2750cd43c.slice/crio-3ccb0af91234a49cce52e60bb8c5d83c89a7cbfd38f25c2175232126f6780778 WatchSource:0}: Error finding container 3ccb0af91234a49cce52e60bb8c5d83c89a7cbfd38f25c2175232126f6780778: Status 404 returned error can't find the container with id 3ccb0af91234a49cce52e60bb8c5d83c89a7cbfd38f25c2175232126f6780778 Feb 18 19:37:50 crc kubenswrapper[4942]: I0218 19:37:50.836128 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"c401dd00-c0a5-41c1-98ea-873e0e2ce7bf","Type":"ContainerStarted","Data":"26d23b465934beeb44398ef9a7091b49a63a4c39f0979d582e519cc2943d3297"} Feb 18 19:37:50 crc kubenswrapper[4942]: I0218 19:37:50.838305 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f2c79fe-40ed-4218-9db5-ecf2750cd43c","Type":"ContainerStarted","Data":"2b016dd053ee1c6b8b02284b80f61da51907ed4b62870908ede29de5ad95f8a6"} Feb 18 19:37:50 crc kubenswrapper[4942]: I0218 19:37:50.838357 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f2c79fe-40ed-4218-9db5-ecf2750cd43c","Type":"ContainerStarted","Data":"754248603e713494d1ff408069c74a57b870cc3dc9fca6bf7971c23184806daf"} Feb 18 19:37:50 crc kubenswrapper[4942]: I0218 19:37:50.838372 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f2c79fe-40ed-4218-9db5-ecf2750cd43c","Type":"ContainerStarted","Data":"3ccb0af91234a49cce52e60bb8c5d83c89a7cbfd38f25c2175232126f6780778"} Feb 18 19:37:50 crc kubenswrapper[4942]: I0218 19:37:50.870093 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.870071579 podStartE2EDuration="2.870071579s" podCreationTimestamp="2026-02-18 19:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:50.858031763 +0000 UTC m=+1230.562964428" watchObservedRunningTime="2026-02-18 19:37:50.870071579 +0000 UTC m=+1230.575004244" Feb 18 19:37:50 crc kubenswrapper[4942]: I0218 19:37:50.882980 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.882954967 podStartE2EDuration="2.882954967s" podCreationTimestamp="2026-02-18 19:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:50.880544064 +0000 UTC m=+1230.585476719" watchObservedRunningTime="2026-02-18 19:37:50.882954967 +0000 UTC m=+1230.587887642" Feb 18 19:37:51 crc kubenswrapper[4942]: I0218 19:37:51.170306 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 19:37:51 crc kubenswrapper[4942]: I0218 19:37:51.170912 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 19:37:51 crc kubenswrapper[4942]: I0218 19:37:51.171736 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 19:37:51 crc kubenswrapper[4942]: I0218 19:37:51.176789 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 19:37:51 crc kubenswrapper[4942]: I0218 19:37:51.848811 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 19:37:51 crc kubenswrapper[4942]: I0218 19:37:51.853452 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.037492 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-7jhpx"] Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.039024 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.057744 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-7jhpx"] Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.147399 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.147454 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.147473 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.147818 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-config\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.147908 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-nkld8\" (UniqueName: \"kubernetes.io/projected/7097c36f-c705-4a21-be80-ea057d24ace8-kube-api-access-nkld8\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.148001 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.249720 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-config\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.249825 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkld8\" (UniqueName: \"kubernetes.io/projected/7097c36f-c705-4a21-be80-ea057d24ace8-kube-api-access-nkld8\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.249867 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.250019 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.250054 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.250102 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.251187 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.252574 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.252613 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.252723 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.252808 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-config\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.274538 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkld8\" (UniqueName: \"kubernetes.io/projected/7097c36f-c705-4a21-be80-ea057d24ace8-kube-api-access-nkld8\") pod \"dnsmasq-dns-89c5cd4d5-7jhpx\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:52 crc kubenswrapper[4942]: I0218 19:37:52.361126 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:53 crc kubenswrapper[4942]: I0218 19:37:52.876648 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-7jhpx"] Feb 18 19:37:53 crc kubenswrapper[4942]: I0218 19:37:53.717279 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:37:53 crc kubenswrapper[4942]: I0218 19:37:53.717838 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="ceilometer-central-agent" containerID="cri-o://122dbfd5620ffaa553bc2db1d7e57c3c94dde9e3c18c2a3f01a6cf8f6a924404" gracePeriod=30 Feb 18 19:37:53 crc kubenswrapper[4942]: I0218 19:37:53.717920 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="proxy-httpd" containerID="cri-o://9ec5caf96b65f1b74beab3396ebf587794daec1e7c6b002fe84e8ad8a0730e95" gracePeriod=30 Feb 18 19:37:53 crc kubenswrapper[4942]: I0218 19:37:53.717956 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="sg-core" containerID="cri-o://185207e1f6c945d8a225d619dcb1bdc76dddd2e40a23e7344e8ebfbde1ab9c92" gracePeriod=30 Feb 18 19:37:53 crc kubenswrapper[4942]: I0218 19:37:53.717969 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="ceilometer-notification-agent" containerID="cri-o://9e97f1132a204b3bf2c933f6371c1ae1d572289c719b35111b591635e9241e91" gracePeriod=30 Feb 18 19:37:53 crc kubenswrapper[4942]: I0218 19:37:53.736433 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" 
containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 18 19:37:53 crc kubenswrapper[4942]: I0218 19:37:53.741188 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:37:53 crc kubenswrapper[4942]: I0218 19:37:53.741251 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:37:53 crc kubenswrapper[4942]: I0218 19:37:53.866984 4942 generic.go:334] "Generic (PLEG): container finished" podID="7097c36f-c705-4a21-be80-ea057d24ace8" containerID="81cc4bd58d4674e6299bf3f92627b59ac247ba15bf8a7017013a911bae4a12c5" exitCode=0 Feb 18 19:37:53 crc kubenswrapper[4942]: I0218 19:37:53.867104 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" event={"ID":"7097c36f-c705-4a21-be80-ea057d24ace8","Type":"ContainerDied","Data":"81cc4bd58d4674e6299bf3f92627b59ac247ba15bf8a7017013a911bae4a12c5"} Feb 18 19:37:53 crc kubenswrapper[4942]: I0218 19:37:53.867136 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" event={"ID":"7097c36f-c705-4a21-be80-ea057d24ace8","Type":"ContainerStarted","Data":"5c67289996fab91f1e19ef4b863aed3cd05ec958251ed161ac176da9f1432384"} Feb 18 19:37:53 crc kubenswrapper[4942]: I0218 19:37:53.884207 4942 generic.go:334] "Generic (PLEG): container finished" podID="b3ce1800-8544-49d6-84a8-f635038f26da" containerID="185207e1f6c945d8a225d619dcb1bdc76dddd2e40a23e7344e8ebfbde1ab9c92" exitCode=2 Feb 18 19:37:53 
crc kubenswrapper[4942]: I0218 19:37:53.884276 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ce1800-8544-49d6-84a8-f635038f26da","Type":"ContainerDied","Data":"185207e1f6c945d8a225d619dcb1bdc76dddd2e40a23e7344e8ebfbde1ab9c92"} Feb 18 19:37:54 crc kubenswrapper[4942]: I0218 19:37:54.279234 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:54 crc kubenswrapper[4942]: I0218 19:37:54.294955 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:37:54 crc kubenswrapper[4942]: I0218 19:37:54.295009 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:37:54 crc kubenswrapper[4942]: I0218 19:37:54.618151 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:37:54 crc kubenswrapper[4942]: I0218 19:37:54.895648 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" event={"ID":"7097c36f-c705-4a21-be80-ea057d24ace8","Type":"ContainerStarted","Data":"59bdba50db92d7f040d8a79e5e6b99a3471a426e80e12a58995334733d255e36"} Feb 18 19:37:54 crc kubenswrapper[4942]: I0218 19:37:54.895815 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:37:54 crc kubenswrapper[4942]: I0218 19:37:54.898801 4942 generic.go:334] "Generic (PLEG): container finished" podID="b3ce1800-8544-49d6-84a8-f635038f26da" containerID="9ec5caf96b65f1b74beab3396ebf587794daec1e7c6b002fe84e8ad8a0730e95" exitCode=0 Feb 18 19:37:54 crc kubenswrapper[4942]: I0218 19:37:54.898848 4942 generic.go:334] "Generic (PLEG): container finished" podID="b3ce1800-8544-49d6-84a8-f635038f26da" containerID="122dbfd5620ffaa553bc2db1d7e57c3c94dde9e3c18c2a3f01a6cf8f6a924404" exitCode=0 Feb 18 19:37:54 crc kubenswrapper[4942]: I0218 19:37:54.898872 4942 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ce1800-8544-49d6-84a8-f635038f26da","Type":"ContainerDied","Data":"9ec5caf96b65f1b74beab3396ebf587794daec1e7c6b002fe84e8ad8a0730e95"} Feb 18 19:37:54 crc kubenswrapper[4942]: I0218 19:37:54.898915 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ce1800-8544-49d6-84a8-f635038f26da","Type":"ContainerDied","Data":"122dbfd5620ffaa553bc2db1d7e57c3c94dde9e3c18c2a3f01a6cf8f6a924404"} Feb 18 19:37:54 crc kubenswrapper[4942]: I0218 19:37:54.899036 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" containerName="nova-api-log" containerID="cri-o://7b3e16de2841d45806031cfe8067c2ec6814cccea938f4c062433863dc9f77c2" gracePeriod=30 Feb 18 19:37:54 crc kubenswrapper[4942]: I0218 19:37:54.899083 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" containerName="nova-api-api" containerID="cri-o://76b344abb86d057dcf20c894a0759c7d468787e44aa3785ecfbdff449a08568b" gracePeriod=30 Feb 18 19:37:54 crc kubenswrapper[4942]: I0218 19:37:54.923677 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" podStartSLOduration=2.923658294 podStartE2EDuration="2.923658294s" podCreationTimestamp="2026-02-18 19:37:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:37:54.914281348 +0000 UTC m=+1234.619214013" watchObservedRunningTime="2026-02-18 19:37:54.923658294 +0000 UTC m=+1234.628590959" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.791836 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.833262 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ce1800-8544-49d6-84a8-f635038f26da-run-httpd\") pod \"b3ce1800-8544-49d6-84a8-f635038f26da\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.833340 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-combined-ca-bundle\") pod \"b3ce1800-8544-49d6-84a8-f635038f26da\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.833417 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2f69\" (UniqueName: \"kubernetes.io/projected/b3ce1800-8544-49d6-84a8-f635038f26da-kube-api-access-d2f69\") pod \"b3ce1800-8544-49d6-84a8-f635038f26da\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.833443 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ce1800-8544-49d6-84a8-f635038f26da-log-httpd\") pod \"b3ce1800-8544-49d6-84a8-f635038f26da\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.833492 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-config-data\") pod \"b3ce1800-8544-49d6-84a8-f635038f26da\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.833522 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-sg-core-conf-yaml\") pod \"b3ce1800-8544-49d6-84a8-f635038f26da\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.833588 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-scripts\") pod \"b3ce1800-8544-49d6-84a8-f635038f26da\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.833630 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-ceilometer-tls-certs\") pod \"b3ce1800-8544-49d6-84a8-f635038f26da\" (UID: \"b3ce1800-8544-49d6-84a8-f635038f26da\") " Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.834614 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3ce1800-8544-49d6-84a8-f635038f26da-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b3ce1800-8544-49d6-84a8-f635038f26da" (UID: "b3ce1800-8544-49d6-84a8-f635038f26da"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.835032 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3ce1800-8544-49d6-84a8-f635038f26da-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b3ce1800-8544-49d6-84a8-f635038f26da" (UID: "b3ce1800-8544-49d6-84a8-f635038f26da"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.842593 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3ce1800-8544-49d6-84a8-f635038f26da-kube-api-access-d2f69" (OuterVolumeSpecName: "kube-api-access-d2f69") pod "b3ce1800-8544-49d6-84a8-f635038f26da" (UID: "b3ce1800-8544-49d6-84a8-f635038f26da"). InnerVolumeSpecName "kube-api-access-d2f69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.845940 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-scripts" (OuterVolumeSpecName: "scripts") pod "b3ce1800-8544-49d6-84a8-f635038f26da" (UID: "b3ce1800-8544-49d6-84a8-f635038f26da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.876151 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b3ce1800-8544-49d6-84a8-f635038f26da" (UID: "b3ce1800-8544-49d6-84a8-f635038f26da"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.910613 4942 generic.go:334] "Generic (PLEG): container finished" podID="e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" containerID="7b3e16de2841d45806031cfe8067c2ec6814cccea938f4c062433863dc9f77c2" exitCode=143 Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.910686 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a","Type":"ContainerDied","Data":"7b3e16de2841d45806031cfe8067c2ec6814cccea938f4c062433863dc9f77c2"} Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.913941 4942 generic.go:334] "Generic (PLEG): container finished" podID="b3ce1800-8544-49d6-84a8-f635038f26da" containerID="9e97f1132a204b3bf2c933f6371c1ae1d572289c719b35111b591635e9241e91" exitCode=0 Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.914962 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.915509 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ce1800-8544-49d6-84a8-f635038f26da","Type":"ContainerDied","Data":"9e97f1132a204b3bf2c933f6371c1ae1d572289c719b35111b591635e9241e91"} Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.915590 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b3ce1800-8544-49d6-84a8-f635038f26da","Type":"ContainerDied","Data":"4e628a6fe7bba13d144b29335e6e3f96fe39a4ea90cabd697733b132cffcd80d"} Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.915651 4942 scope.go:117] "RemoveContainer" containerID="9ec5caf96b65f1b74beab3396ebf587794daec1e7c6b002fe84e8ad8a0730e95" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.939187 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.939388 4942 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ce1800-8544-49d6-84a8-f635038f26da-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.939454 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2f69\" (UniqueName: \"kubernetes.io/projected/b3ce1800-8544-49d6-84a8-f635038f26da-kube-api-access-d2f69\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.939515 4942 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b3ce1800-8544-49d6-84a8-f635038f26da-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.939577 4942 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.946408 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b3ce1800-8544-49d6-84a8-f635038f26da" (UID: "b3ce1800-8544-49d6-84a8-f635038f26da"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.948617 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3ce1800-8544-49d6-84a8-f635038f26da" (UID: "b3ce1800-8544-49d6-84a8-f635038f26da"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.964899 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-config-data" (OuterVolumeSpecName: "config-data") pod "b3ce1800-8544-49d6-84a8-f635038f26da" (UID: "b3ce1800-8544-49d6-84a8-f635038f26da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:55 crc kubenswrapper[4942]: I0218 19:37:55.992890 4942 scope.go:117] "RemoveContainer" containerID="185207e1f6c945d8a225d619dcb1bdc76dddd2e40a23e7344e8ebfbde1ab9c92" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.012714 4942 scope.go:117] "RemoveContainer" containerID="9e97f1132a204b3bf2c933f6371c1ae1d572289c719b35111b591635e9241e91" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.032043 4942 scope.go:117] "RemoveContainer" containerID="122dbfd5620ffaa553bc2db1d7e57c3c94dde9e3c18c2a3f01a6cf8f6a924404" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.042984 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.043010 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.043019 4942 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3ce1800-8544-49d6-84a8-f635038f26da-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.049597 4942 scope.go:117] "RemoveContainer" 
containerID="9ec5caf96b65f1b74beab3396ebf587794daec1e7c6b002fe84e8ad8a0730e95" Feb 18 19:37:56 crc kubenswrapper[4942]: E0218 19:37:56.051927 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ec5caf96b65f1b74beab3396ebf587794daec1e7c6b002fe84e8ad8a0730e95\": container with ID starting with 9ec5caf96b65f1b74beab3396ebf587794daec1e7c6b002fe84e8ad8a0730e95 not found: ID does not exist" containerID="9ec5caf96b65f1b74beab3396ebf587794daec1e7c6b002fe84e8ad8a0730e95" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.051973 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ec5caf96b65f1b74beab3396ebf587794daec1e7c6b002fe84e8ad8a0730e95"} err="failed to get container status \"9ec5caf96b65f1b74beab3396ebf587794daec1e7c6b002fe84e8ad8a0730e95\": rpc error: code = NotFound desc = could not find container \"9ec5caf96b65f1b74beab3396ebf587794daec1e7c6b002fe84e8ad8a0730e95\": container with ID starting with 9ec5caf96b65f1b74beab3396ebf587794daec1e7c6b002fe84e8ad8a0730e95 not found: ID does not exist" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.052019 4942 scope.go:117] "RemoveContainer" containerID="185207e1f6c945d8a225d619dcb1bdc76dddd2e40a23e7344e8ebfbde1ab9c92" Feb 18 19:37:56 crc kubenswrapper[4942]: E0218 19:37:56.058086 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"185207e1f6c945d8a225d619dcb1bdc76dddd2e40a23e7344e8ebfbde1ab9c92\": container with ID starting with 185207e1f6c945d8a225d619dcb1bdc76dddd2e40a23e7344e8ebfbde1ab9c92 not found: ID does not exist" containerID="185207e1f6c945d8a225d619dcb1bdc76dddd2e40a23e7344e8ebfbde1ab9c92" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.058127 4942 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"185207e1f6c945d8a225d619dcb1bdc76dddd2e40a23e7344e8ebfbde1ab9c92"} err="failed to get container status \"185207e1f6c945d8a225d619dcb1bdc76dddd2e40a23e7344e8ebfbde1ab9c92\": rpc error: code = NotFound desc = could not find container \"185207e1f6c945d8a225d619dcb1bdc76dddd2e40a23e7344e8ebfbde1ab9c92\": container with ID starting with 185207e1f6c945d8a225d619dcb1bdc76dddd2e40a23e7344e8ebfbde1ab9c92 not found: ID does not exist" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.058152 4942 scope.go:117] "RemoveContainer" containerID="9e97f1132a204b3bf2c933f6371c1ae1d572289c719b35111b591635e9241e91" Feb 18 19:37:56 crc kubenswrapper[4942]: E0218 19:37:56.058476 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e97f1132a204b3bf2c933f6371c1ae1d572289c719b35111b591635e9241e91\": container with ID starting with 9e97f1132a204b3bf2c933f6371c1ae1d572289c719b35111b591635e9241e91 not found: ID does not exist" containerID="9e97f1132a204b3bf2c933f6371c1ae1d572289c719b35111b591635e9241e91" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.058501 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e97f1132a204b3bf2c933f6371c1ae1d572289c719b35111b591635e9241e91"} err="failed to get container status \"9e97f1132a204b3bf2c933f6371c1ae1d572289c719b35111b591635e9241e91\": rpc error: code = NotFound desc = could not find container \"9e97f1132a204b3bf2c933f6371c1ae1d572289c719b35111b591635e9241e91\": container with ID starting with 9e97f1132a204b3bf2c933f6371c1ae1d572289c719b35111b591635e9241e91 not found: ID does not exist" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.058517 4942 scope.go:117] "RemoveContainer" containerID="122dbfd5620ffaa553bc2db1d7e57c3c94dde9e3c18c2a3f01a6cf8f6a924404" Feb 18 19:37:56 crc kubenswrapper[4942]: E0218 19:37:56.058811 4942 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"122dbfd5620ffaa553bc2db1d7e57c3c94dde9e3c18c2a3f01a6cf8f6a924404\": container with ID starting with 122dbfd5620ffaa553bc2db1d7e57c3c94dde9e3c18c2a3f01a6cf8f6a924404 not found: ID does not exist" containerID="122dbfd5620ffaa553bc2db1d7e57c3c94dde9e3c18c2a3f01a6cf8f6a924404" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.058847 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"122dbfd5620ffaa553bc2db1d7e57c3c94dde9e3c18c2a3f01a6cf8f6a924404"} err="failed to get container status \"122dbfd5620ffaa553bc2db1d7e57c3c94dde9e3c18c2a3f01a6cf8f6a924404\": rpc error: code = NotFound desc = could not find container \"122dbfd5620ffaa553bc2db1d7e57c3c94dde9e3c18c2a3f01a6cf8f6a924404\": container with ID starting with 122dbfd5620ffaa553bc2db1d7e57c3c94dde9e3c18c2a3f01a6cf8f6a924404 not found: ID does not exist" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.252472 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.261144 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.273895 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:37:56 crc kubenswrapper[4942]: E0218 19:37:56.274429 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="sg-core" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.274492 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="sg-core" Feb 18 19:37:56 crc kubenswrapper[4942]: E0218 19:37:56.274596 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="proxy-httpd" Feb 18 19:37:56 crc 
kubenswrapper[4942]: I0218 19:37:56.274643 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="proxy-httpd" Feb 18 19:37:56 crc kubenswrapper[4942]: E0218 19:37:56.274697 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="ceilometer-central-agent" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.274741 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="ceilometer-central-agent" Feb 18 19:37:56 crc kubenswrapper[4942]: E0218 19:37:56.274811 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="ceilometer-notification-agent" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.275032 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="ceilometer-notification-agent" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.275365 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="ceilometer-central-agent" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.275431 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="ceilometer-notification-agent" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.275492 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="proxy-httpd" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.275542 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" containerName="sg-core" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.277397 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.286564 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.286962 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.287820 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.300469 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.348503 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c330a0f3-afd7-4b55-8d33-8617b38bba91-log-httpd\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.348617 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-config-data\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.348673 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6wdq\" (UniqueName: \"kubernetes.io/projected/c330a0f3-afd7-4b55-8d33-8617b38bba91-kube-api-access-h6wdq\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.348693 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.348711 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c330a0f3-afd7-4b55-8d33-8617b38bba91-run-httpd\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.348779 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.348798 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.348815 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-scripts\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.450782 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c330a0f3-afd7-4b55-8d33-8617b38bba91-run-httpd\") pod \"ceilometer-0\" (UID: 
\"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.451288 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.451358 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.451445 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-scripts\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.451562 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c330a0f3-afd7-4b55-8d33-8617b38bba91-log-httpd\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.451687 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-config-data\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.452668 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-h6wdq\" (UniqueName: \"kubernetes.io/projected/c330a0f3-afd7-4b55-8d33-8617b38bba91-kube-api-access-h6wdq\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.453047 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.452213 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c330a0f3-afd7-4b55-8d33-8617b38bba91-run-httpd\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.452303 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c330a0f3-afd7-4b55-8d33-8617b38bba91-log-httpd\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.455576 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.455922 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-config-data\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc 
kubenswrapper[4942]: I0218 19:37:56.456288 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.456660 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.458445 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c330a0f3-afd7-4b55-8d33-8617b38bba91-scripts\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.471404 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6wdq\" (UniqueName: \"kubernetes.io/projected/c330a0f3-afd7-4b55-8d33-8617b38bba91-kube-api-access-h6wdq\") pod \"ceilometer-0\" (UID: \"c330a0f3-afd7-4b55-8d33-8617b38bba91\") " pod="openstack/ceilometer-0" Feb 18 19:37:56 crc kubenswrapper[4942]: I0218 19:37:56.606431 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 19:37:57 crc kubenswrapper[4942]: I0218 19:37:57.051127 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3ce1800-8544-49d6-84a8-f635038f26da" path="/var/lib/kubelet/pods/b3ce1800-8544-49d6-84a8-f635038f26da/volumes" Feb 18 19:37:57 crc kubenswrapper[4942]: I0218 19:37:57.108425 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 19:37:57 crc kubenswrapper[4942]: I0218 19:37:57.932978 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c330a0f3-afd7-4b55-8d33-8617b38bba91","Type":"ContainerStarted","Data":"724cd265bca66d36c5206546352c1744fd4175372a93790f844a697f57c62cf3"} Feb 18 19:37:57 crc kubenswrapper[4942]: I0218 19:37:57.933326 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c330a0f3-afd7-4b55-8d33-8617b38bba91","Type":"ContainerStarted","Data":"4f3eeeb1d2a0ef0c1322e2cefb10443472f8be4c64f7fa8d9722e28c555476bb"} Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.460964 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.626382 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-logs\") pod \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.626659 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-combined-ca-bundle\") pod \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.626680 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-config-data\") pod \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.626814 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbc72\" (UniqueName: \"kubernetes.io/projected/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-kube-api-access-hbc72\") pod \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\" (UID: \"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a\") " Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.627041 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-logs" (OuterVolumeSpecName: "logs") pod "e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" (UID: "e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.627542 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.644477 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-kube-api-access-hbc72" (OuterVolumeSpecName: "kube-api-access-hbc72") pod "e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" (UID: "e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a"). InnerVolumeSpecName "kube-api-access-hbc72". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.661562 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-config-data" (OuterVolumeSpecName: "config-data") pod "e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" (UID: "e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.661899 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" (UID: "e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.730073 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbc72\" (UniqueName: \"kubernetes.io/projected/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-kube-api-access-hbc72\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.730121 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.730138 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.944430 4942 generic.go:334] "Generic (PLEG): container finished" podID="e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" containerID="76b344abb86d057dcf20c894a0759c7d468787e44aa3785ecfbdff449a08568b" exitCode=0 Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.944492 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.944487 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a","Type":"ContainerDied","Data":"76b344abb86d057dcf20c894a0759c7d468787e44aa3785ecfbdff449a08568b"} Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.944666 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a","Type":"ContainerDied","Data":"cf0e7a844a7633e58acd2bee9698d5dc7b514eece929972cfe261a5a10983dd7"} Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.944703 4942 scope.go:117] "RemoveContainer" containerID="76b344abb86d057dcf20c894a0759c7d468787e44aa3785ecfbdff449a08568b" Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.956830 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c330a0f3-afd7-4b55-8d33-8617b38bba91","Type":"ContainerStarted","Data":"532c795a258873ae20237a974d4194a954b9ccd2130576ed8beb675e6befbd60"} Feb 18 19:37:58 crc kubenswrapper[4942]: I0218 19:37:58.976284 4942 scope.go:117] "RemoveContainer" containerID="7b3e16de2841d45806031cfe8067c2ec6814cccea938f4c062433863dc9f77c2" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.006449 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.028283 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.062049 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" path="/var/lib/kubelet/pods/e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a/volumes" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.063133 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 19:37:59 
crc kubenswrapper[4942]: E0218 19:37:59.063713 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" containerName="nova-api-api" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.063790 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" containerName="nova-api-api" Feb 18 19:37:59 crc kubenswrapper[4942]: E0218 19:37:59.063858 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" containerName="nova-api-log" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.063902 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" containerName="nova-api-log" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.064112 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" containerName="nova-api-api" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.064198 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="e52b4ebc-1d74-4e58-9a7a-dddb7b2d036a" containerName="nova-api-log" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.065396 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.065543 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.067882 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.067995 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.068811 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.094970 4942 scope.go:117] "RemoveContainer" containerID="76b344abb86d057dcf20c894a0759c7d468787e44aa3785ecfbdff449a08568b" Feb 18 19:37:59 crc kubenswrapper[4942]: E0218 19:37:59.095457 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76b344abb86d057dcf20c894a0759c7d468787e44aa3785ecfbdff449a08568b\": container with ID starting with 76b344abb86d057dcf20c894a0759c7d468787e44aa3785ecfbdff449a08568b not found: ID does not exist" containerID="76b344abb86d057dcf20c894a0759c7d468787e44aa3785ecfbdff449a08568b" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.095486 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76b344abb86d057dcf20c894a0759c7d468787e44aa3785ecfbdff449a08568b"} err="failed to get container status \"76b344abb86d057dcf20c894a0759c7d468787e44aa3785ecfbdff449a08568b\": rpc error: code = NotFound desc = could not find container \"76b344abb86d057dcf20c894a0759c7d468787e44aa3785ecfbdff449a08568b\": container with ID starting with 76b344abb86d057dcf20c894a0759c7d468787e44aa3785ecfbdff449a08568b not found: ID does not exist" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.095507 4942 scope.go:117] "RemoveContainer" containerID="7b3e16de2841d45806031cfe8067c2ec6814cccea938f4c062433863dc9f77c2" Feb 18 19:37:59 crc 
kubenswrapper[4942]: E0218 19:37:59.095823 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b3e16de2841d45806031cfe8067c2ec6814cccea938f4c062433863dc9f77c2\": container with ID starting with 7b3e16de2841d45806031cfe8067c2ec6814cccea938f4c062433863dc9f77c2 not found: ID does not exist" containerID="7b3e16de2841d45806031cfe8067c2ec6814cccea938f4c062433863dc9f77c2" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.095875 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b3e16de2841d45806031cfe8067c2ec6814cccea938f4c062433863dc9f77c2"} err="failed to get container status \"7b3e16de2841d45806031cfe8067c2ec6814cccea938f4c062433863dc9f77c2\": rpc error: code = NotFound desc = could not find container \"7b3e16de2841d45806031cfe8067c2ec6814cccea938f4c062433863dc9f77c2\": container with ID starting with 7b3e16de2841d45806031cfe8067c2ec6814cccea938f4c062433863dc9f77c2 not found: ID does not exist" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.245005 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-config-data\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.245180 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.245273 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/dfa84b55-e3b4-425c-983b-57e60b06ee59-logs\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.245405 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-public-tls-certs\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.245475 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdxwb\" (UniqueName: \"kubernetes.io/projected/dfa84b55-e3b4-425c-983b-57e60b06ee59-kube-api-access-gdxwb\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.245544 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.280142 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.295084 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.295422 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.296621 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.347555 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-public-tls-certs\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.347604 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdxwb\" (UniqueName: \"kubernetes.io/projected/dfa84b55-e3b4-425c-983b-57e60b06ee59-kube-api-access-gdxwb\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.347630 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.347714 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-config-data\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.347822 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.347851 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/dfa84b55-e3b4-425c-983b-57e60b06ee59-logs\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.348244 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfa84b55-e3b4-425c-983b-57e60b06ee59-logs\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.352492 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.352634 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-config-data\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.353003 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.353546 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-public-tls-certs\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.366517 4942 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-gdxwb\" (UniqueName: \"kubernetes.io/projected/dfa84b55-e3b4-425c-983b-57e60b06ee59-kube-api-access-gdxwb\") pod \"nova-api-0\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " pod="openstack/nova-api-0" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.409584 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.882427 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:37:59 crc kubenswrapper[4942]: W0218 19:37:59.885355 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfa84b55_e3b4_425c_983b_57e60b06ee59.slice/crio-7455a83c698c3d6e0c19dcaa1a1f353e541c5f547cd7e8fdbe3ffdd928daf970 WatchSource:0}: Error finding container 7455a83c698c3d6e0c19dcaa1a1f353e541c5f547cd7e8fdbe3ffdd928daf970: Status 404 returned error can't find the container with id 7455a83c698c3d6e0c19dcaa1a1f353e541c5f547cd7e8fdbe3ffdd928daf970 Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.967457 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dfa84b55-e3b4-425c-983b-57e60b06ee59","Type":"ContainerStarted","Data":"7455a83c698c3d6e0c19dcaa1a1f353e541c5f547cd7e8fdbe3ffdd928daf970"} Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.969826 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c330a0f3-afd7-4b55-8d33-8617b38bba91","Type":"ContainerStarted","Data":"f410fe69fa8e94a16f161a61d09576b4203d2de3fee69dfb84d2e69966092817"} Feb 18 19:37:59 crc kubenswrapper[4942]: I0218 19:37:59.984100 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.163883 4942 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-cell-mapping-6sjb6"] Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.165225 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6sjb6" Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.168049 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.168215 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.187869 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-6sjb6"] Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.266647 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-scripts\") pod \"nova-cell1-cell-mapping-6sjb6\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " pod="openstack/nova-cell1-cell-mapping-6sjb6" Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.266697 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-config-data\") pod \"nova-cell1-cell-mapping-6sjb6\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " pod="openstack/nova-cell1-cell-mapping-6sjb6" Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.266790 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6sjb6\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " pod="openstack/nova-cell1-cell-mapping-6sjb6" Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 
19:38:00.266838 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm47r\" (UniqueName: \"kubernetes.io/projected/2c972a02-9d35-43d1-9ef6-ab99f7cded50-kube-api-access-sm47r\") pod \"nova-cell1-cell-mapping-6sjb6\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " pod="openstack/nova-cell1-cell-mapping-6sjb6" Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.306970 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.306989 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.368353 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6sjb6\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " pod="openstack/nova-cell1-cell-mapping-6sjb6" Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.368457 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm47r\" (UniqueName: \"kubernetes.io/projected/2c972a02-9d35-43d1-9ef6-ab99f7cded50-kube-api-access-sm47r\") pod \"nova-cell1-cell-mapping-6sjb6\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " pod="openstack/nova-cell1-cell-mapping-6sjb6" Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.368635 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-scripts\") pod \"nova-cell1-cell-mapping-6sjb6\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " pod="openstack/nova-cell1-cell-mapping-6sjb6" Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.369172 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-config-data\") pod \"nova-cell1-cell-mapping-6sjb6\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " pod="openstack/nova-cell1-cell-mapping-6sjb6" Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.373344 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-scripts\") pod \"nova-cell1-cell-mapping-6sjb6\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " pod="openstack/nova-cell1-cell-mapping-6sjb6" Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.373415 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6sjb6\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " pod="openstack/nova-cell1-cell-mapping-6sjb6" Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.375338 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-config-data\") pod \"nova-cell1-cell-mapping-6sjb6\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " pod="openstack/nova-cell1-cell-mapping-6sjb6" Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.387291 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm47r\" (UniqueName: 
\"kubernetes.io/projected/2c972a02-9d35-43d1-9ef6-ab99f7cded50-kube-api-access-sm47r\") pod \"nova-cell1-cell-mapping-6sjb6\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") " pod="openstack/nova-cell1-cell-mapping-6sjb6" Feb 18 19:38:00 crc kubenswrapper[4942]: I0218 19:38:00.563706 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6sjb6" Feb 18 19:38:01 crc kubenswrapper[4942]: I0218 19:38:01.012321 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dfa84b55-e3b4-425c-983b-57e60b06ee59","Type":"ContainerStarted","Data":"601eab2ba5b673f055b02438a80da68f6ec4ed45d0a9b9a92cb586749d250eeb"} Feb 18 19:38:01 crc kubenswrapper[4942]: I0218 19:38:01.017727 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dfa84b55-e3b4-425c-983b-57e60b06ee59","Type":"ContainerStarted","Data":"57854175ad36d4613dd7ba3f9c987cf448463e0159084dfab670f1b0ecf637a2"} Feb 18 19:38:01 crc kubenswrapper[4942]: I0218 19:38:01.076798 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.076774716 podStartE2EDuration="3.076774716s" podCreationTimestamp="2026-02-18 19:37:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:01.052330244 +0000 UTC m=+1240.757262909" watchObservedRunningTime="2026-02-18 19:38:01.076774716 +0000 UTC m=+1240.781707401" Feb 18 19:38:01 crc kubenswrapper[4942]: I0218 19:38:01.159644 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-6sjb6"] Feb 18 19:38:01 crc kubenswrapper[4942]: W0218 19:38:01.170390 4942 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c972a02_9d35_43d1_9ef6_ab99f7cded50.slice/crio-1818d9219730f11170821ea242e1d7c9a874730058c28d86097d81ff414749bb WatchSource:0}: Error finding container 1818d9219730f11170821ea242e1d7c9a874730058c28d86097d81ff414749bb: Status 404 returned error can't find the container with id 1818d9219730f11170821ea242e1d7c9a874730058c28d86097d81ff414749bb Feb 18 19:38:02 crc kubenswrapper[4942]: I0218 19:38:02.022870 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6sjb6" event={"ID":"2c972a02-9d35-43d1-9ef6-ab99f7cded50","Type":"ContainerStarted","Data":"493fbf668fd581eae9f157a3d4dd7cefc935750aeaa50d79a8dc2cadd67f3413"} Feb 18 19:38:02 crc kubenswrapper[4942]: I0218 19:38:02.022917 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6sjb6" event={"ID":"2c972a02-9d35-43d1-9ef6-ab99f7cded50","Type":"ContainerStarted","Data":"1818d9219730f11170821ea242e1d7c9a874730058c28d86097d81ff414749bb"} Feb 18 19:38:02 crc kubenswrapper[4942]: I0218 19:38:02.030313 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c330a0f3-afd7-4b55-8d33-8617b38bba91","Type":"ContainerStarted","Data":"d95c7e55f7d0cdb9979c16f83fcc95679308cf40adf688c3329d8aaa4109711b"} Feb 18 19:38:02 crc kubenswrapper[4942]: I0218 19:38:02.030352 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 19:38:02 crc kubenswrapper[4942]: I0218 19:38:02.046252 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-6sjb6" podStartSLOduration=2.046231231 podStartE2EDuration="2.046231231s" podCreationTimestamp="2026-02-18 19:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:02.041065645 +0000 UTC m=+1241.745998310" 
watchObservedRunningTime="2026-02-18 19:38:02.046231231 +0000 UTC m=+1241.751163896" Feb 18 19:38:02 crc kubenswrapper[4942]: I0218 19:38:02.104261 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.157512312 podStartE2EDuration="6.104231303s" podCreationTimestamp="2026-02-18 19:37:56 +0000 UTC" firstStartedPulling="2026-02-18 19:37:57.112615007 +0000 UTC m=+1236.817547692" lastFinishedPulling="2026-02-18 19:38:01.059334018 +0000 UTC m=+1240.764266683" observedRunningTime="2026-02-18 19:38:02.087550195 +0000 UTC m=+1241.792482860" watchObservedRunningTime="2026-02-18 19:38:02.104231303 +0000 UTC m=+1241.809163998" Feb 18 19:38:02 crc kubenswrapper[4942]: I0218 19:38:02.363000 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:38:02 crc kubenswrapper[4942]: I0218 19:38:02.449885 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-4gdxj"] Feb 18 19:38:02 crc kubenswrapper[4942]: I0218 19:38:02.450248 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" podUID="df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" containerName="dnsmasq-dns" containerID="cri-o://b68121de2fea4f07edecadb5789b88b34bf8d27823e96cbebb2e52ee0368565c" gracePeriod=10 Feb 18 19:38:02 crc kubenswrapper[4942]: I0218 19:38:02.996894 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.037965 4942 generic.go:334] "Generic (PLEG): container finished" podID="df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" containerID="b68121de2fea4f07edecadb5789b88b34bf8d27823e96cbebb2e52ee0368565c" exitCode=0 Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.038897 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj"
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.039401 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" event={"ID":"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9","Type":"ContainerDied","Data":"b68121de2fea4f07edecadb5789b88b34bf8d27823e96cbebb2e52ee0368565c"}
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.039433 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" event={"ID":"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9","Type":"ContainerDied","Data":"ac7c2c212726ec658ded163971fdbf65aa1ee8ef5f331c952d6143e9bfa521d8"}
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.039451 4942 scope.go:117] "RemoveContainer" containerID="b68121de2fea4f07edecadb5789b88b34bf8d27823e96cbebb2e52ee0368565c"
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.102009 4942 scope.go:117] "RemoveContainer" containerID="5f605ec20eeba22cd1e0c8f762ce0215e3f892afe0ae0fcbbbb922ee4f5af646"
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.123835 4942 scope.go:117] "RemoveContainer" containerID="b68121de2fea4f07edecadb5789b88b34bf8d27823e96cbebb2e52ee0368565c"
Feb 18 19:38:03 crc kubenswrapper[4942]: E0218 19:38:03.125231 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b68121de2fea4f07edecadb5789b88b34bf8d27823e96cbebb2e52ee0368565c\": container with ID starting with b68121de2fea4f07edecadb5789b88b34bf8d27823e96cbebb2e52ee0368565c not found: ID does not exist" containerID="b68121de2fea4f07edecadb5789b88b34bf8d27823e96cbebb2e52ee0368565c"
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.125269 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b68121de2fea4f07edecadb5789b88b34bf8d27823e96cbebb2e52ee0368565c"} err="failed to get container status \"b68121de2fea4f07edecadb5789b88b34bf8d27823e96cbebb2e52ee0368565c\": rpc error: code = NotFound desc = could not find container \"b68121de2fea4f07edecadb5789b88b34bf8d27823e96cbebb2e52ee0368565c\": container with ID starting with b68121de2fea4f07edecadb5789b88b34bf8d27823e96cbebb2e52ee0368565c not found: ID does not exist"
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.125296 4942 scope.go:117] "RemoveContainer" containerID="5f605ec20eeba22cd1e0c8f762ce0215e3f892afe0ae0fcbbbb922ee4f5af646"
Feb 18 19:38:03 crc kubenswrapper[4942]: E0218 19:38:03.128859 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f605ec20eeba22cd1e0c8f762ce0215e3f892afe0ae0fcbbbb922ee4f5af646\": container with ID starting with 5f605ec20eeba22cd1e0c8f762ce0215e3f892afe0ae0fcbbbb922ee4f5af646 not found: ID does not exist" containerID="5f605ec20eeba22cd1e0c8f762ce0215e3f892afe0ae0fcbbbb922ee4f5af646"
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.128921 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f605ec20eeba22cd1e0c8f762ce0215e3f892afe0ae0fcbbbb922ee4f5af646"} err="failed to get container status \"5f605ec20eeba22cd1e0c8f762ce0215e3f892afe0ae0fcbbbb922ee4f5af646\": rpc error: code = NotFound desc = could not find container \"5f605ec20eeba22cd1e0c8f762ce0215e3f892afe0ae0fcbbbb922ee4f5af646\": container with ID starting with 5f605ec20eeba22cd1e0c8f762ce0215e3f892afe0ae0fcbbbb922ee4f5af646 not found: ID does not exist"
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.171475 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-config\") pod \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") "
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.171560 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-ovsdbserver-sb\") pod \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") "
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.171679 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-ovsdbserver-nb\") pod \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") "
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.171733 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shhc6\" (UniqueName: \"kubernetes.io/projected/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-kube-api-access-shhc6\") pod \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") "
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.171774 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-dns-svc\") pod \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") "
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.171819 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-dns-swift-storage-0\") pod \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\" (UID: \"df5e2192-70b4-43cc-a9e0-f9023ba0d4a9\") "
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.179900 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-kube-api-access-shhc6" (OuterVolumeSpecName: "kube-api-access-shhc6") pod "df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" (UID: "df5e2192-70b4-43cc-a9e0-f9023ba0d4a9"). InnerVolumeSpecName "kube-api-access-shhc6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.228381 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" (UID: "df5e2192-70b4-43cc-a9e0-f9023ba0d4a9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.229740 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" (UID: "df5e2192-70b4-43cc-a9e0-f9023ba0d4a9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.231275 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-config" (OuterVolumeSpecName: "config") pod "df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" (UID: "df5e2192-70b4-43cc-a9e0-f9023ba0d4a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.246145 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" (UID: "df5e2192-70b4-43cc-a9e0-f9023ba0d4a9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.263510 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" (UID: "df5e2192-70b4-43cc-a9e0-f9023ba0d4a9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.275514 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.275544 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.275556 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shhc6\" (UniqueName: \"kubernetes.io/projected/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-kube-api-access-shhc6\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.275567 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.275577 4942 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.275585 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9-config\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.375473 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-4gdxj"]
Feb 18 19:38:03 crc kubenswrapper[4942]: I0218 19:38:03.388438 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-4gdxj"]
Feb 18 19:38:05 crc kubenswrapper[4942]: I0218 19:38:05.049567 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" path="/var/lib/kubelet/pods/df5e2192-70b4-43cc-a9e0-f9023ba0d4a9/volumes"
Feb 18 19:38:07 crc kubenswrapper[4942]: I0218 19:38:07.092992 4942 generic.go:334] "Generic (PLEG): container finished" podID="2c972a02-9d35-43d1-9ef6-ab99f7cded50" containerID="493fbf668fd581eae9f157a3d4dd7cefc935750aeaa50d79a8dc2cadd67f3413" exitCode=0
Feb 18 19:38:07 crc kubenswrapper[4942]: I0218 19:38:07.093225 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6sjb6" event={"ID":"2c972a02-9d35-43d1-9ef6-ab99f7cded50","Type":"ContainerDied","Data":"493fbf668fd581eae9f157a3d4dd7cefc935750aeaa50d79a8dc2cadd67f3413"}
Feb 18 19:38:07 crc kubenswrapper[4942]: I0218 19:38:07.844364 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757b4f8459-4gdxj" podUID="df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.208:5353: i/o timeout"
Feb 18 19:38:08 crc kubenswrapper[4942]: I0218 19:38:08.537814 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6sjb6"
Feb 18 19:38:08 crc kubenswrapper[4942]: I0218 19:38:08.689237 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-scripts\") pod \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") "
Feb 18 19:38:08 crc kubenswrapper[4942]: I0218 19:38:08.689608 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-combined-ca-bundle\") pod \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") "
Feb 18 19:38:08 crc kubenswrapper[4942]: I0218 19:38:08.689670 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm47r\" (UniqueName: \"kubernetes.io/projected/2c972a02-9d35-43d1-9ef6-ab99f7cded50-kube-api-access-sm47r\") pod \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") "
Feb 18 19:38:08 crc kubenswrapper[4942]: I0218 19:38:08.689694 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-config-data\") pod \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\" (UID: \"2c972a02-9d35-43d1-9ef6-ab99f7cded50\") "
Feb 18 19:38:08 crc kubenswrapper[4942]: I0218 19:38:08.695890 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c972a02-9d35-43d1-9ef6-ab99f7cded50-kube-api-access-sm47r" (OuterVolumeSpecName: "kube-api-access-sm47r") pod "2c972a02-9d35-43d1-9ef6-ab99f7cded50" (UID: "2c972a02-9d35-43d1-9ef6-ab99f7cded50"). InnerVolumeSpecName "kube-api-access-sm47r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:38:08 crc kubenswrapper[4942]: I0218 19:38:08.696825 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-scripts" (OuterVolumeSpecName: "scripts") pod "2c972a02-9d35-43d1-9ef6-ab99f7cded50" (UID: "2c972a02-9d35-43d1-9ef6-ab99f7cded50"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:38:08 crc kubenswrapper[4942]: I0218 19:38:08.725865 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-config-data" (OuterVolumeSpecName: "config-data") pod "2c972a02-9d35-43d1-9ef6-ab99f7cded50" (UID: "2c972a02-9d35-43d1-9ef6-ab99f7cded50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:38:08 crc kubenswrapper[4942]: I0218 19:38:08.728011 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c972a02-9d35-43d1-9ef6-ab99f7cded50" (UID: "2c972a02-9d35-43d1-9ef6-ab99f7cded50"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:38:08 crc kubenswrapper[4942]: I0218 19:38:08.792053 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:08 crc kubenswrapper[4942]: I0218 19:38:08.792119 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm47r\" (UniqueName: \"kubernetes.io/projected/2c972a02-9d35-43d1-9ef6-ab99f7cded50-kube-api-access-sm47r\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:08 crc kubenswrapper[4942]: I0218 19:38:08.792135 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:08 crc kubenswrapper[4942]: I0218 19:38:08.792145 4942 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c972a02-9d35-43d1-9ef6-ab99f7cded50-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:09 crc kubenswrapper[4942]: I0218 19:38:09.136149 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6sjb6"
Feb 18 19:38:09 crc kubenswrapper[4942]: I0218 19:38:09.136128 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6sjb6" event={"ID":"2c972a02-9d35-43d1-9ef6-ab99f7cded50","Type":"ContainerDied","Data":"1818d9219730f11170821ea242e1d7c9a874730058c28d86097d81ff414749bb"}
Feb 18 19:38:09 crc kubenswrapper[4942]: I0218 19:38:09.140042 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1818d9219730f11170821ea242e1d7c9a874730058c28d86097d81ff414749bb"
Feb 18 19:38:09 crc kubenswrapper[4942]: I0218 19:38:09.302872 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 18 19:38:09 crc kubenswrapper[4942]: I0218 19:38:09.303558 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 18 19:38:09 crc kubenswrapper[4942]: I0218 19:38:09.312852 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 18 19:38:09 crc kubenswrapper[4942]: I0218 19:38:09.411038 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 18 19:38:09 crc kubenswrapper[4942]: I0218 19:38:09.411087 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 18 19:38:09 crc kubenswrapper[4942]: I0218 19:38:09.432376 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 18 19:38:09 crc kubenswrapper[4942]: I0218 19:38:09.446869 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 19:38:09 crc kubenswrapper[4942]: I0218 19:38:09.447269 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="934ec68d-b7d2-4435-8e54-4984cea15920" containerName="nova-scheduler-scheduler" containerID="cri-o://f8f16eaf99b27e5378de6b9f610d1eff9cec3f93c1ffd82c5027dc6d962fe712" gracePeriod=30
Feb 18 19:38:09 crc kubenswrapper[4942]: I0218 19:38:09.478088 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 19:38:10 crc kubenswrapper[4942]: I0218 19:38:10.143927 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dfa84b55-e3b4-425c-983b-57e60b06ee59" containerName="nova-api-log" containerID="cri-o://57854175ad36d4613dd7ba3f9c987cf448463e0159084dfab670f1b0ecf637a2" gracePeriod=30
Feb 18 19:38:10 crc kubenswrapper[4942]: I0218 19:38:10.144710 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dfa84b55-e3b4-425c-983b-57e60b06ee59" containerName="nova-api-api" containerID="cri-o://601eab2ba5b673f055b02438a80da68f6ec4ed45d0a9b9a92cb586749d250eeb" gracePeriod=30
Feb 18 19:38:10 crc kubenswrapper[4942]: I0218 19:38:10.155152 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dfa84b55-e3b4-425c-983b-57e60b06ee59" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.220:8774/\": EOF"
Feb 18 19:38:10 crc kubenswrapper[4942]: I0218 19:38:10.155846 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dfa84b55-e3b4-425c-983b-57e60b06ee59" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.220:8774/\": EOF"
Feb 18 19:38:10 crc kubenswrapper[4942]: I0218 19:38:10.162647 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 18 19:38:10 crc kubenswrapper[4942]: E0218 19:38:10.192421 4942 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f8f16eaf99b27e5378de6b9f610d1eff9cec3f93c1ffd82c5027dc6d962fe712" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 18 19:38:10 crc kubenswrapper[4942]: E0218 19:38:10.221627 4942 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f8f16eaf99b27e5378de6b9f610d1eff9cec3f93c1ffd82c5027dc6d962fe712" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 18 19:38:10 crc kubenswrapper[4942]: E0218 19:38:10.237616 4942 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f8f16eaf99b27e5378de6b9f610d1eff9cec3f93c1ffd82c5027dc6d962fe712" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 18 19:38:10 crc kubenswrapper[4942]: E0218 19:38:10.237698 4942 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="934ec68d-b7d2-4435-8e54-4984cea15920" containerName="nova-scheduler-scheduler"
Feb 18 19:38:11 crc kubenswrapper[4942]: I0218 19:38:11.157260 4942 generic.go:334] "Generic (PLEG): container finished" podID="dfa84b55-e3b4-425c-983b-57e60b06ee59" containerID="57854175ad36d4613dd7ba3f9c987cf448463e0159084dfab670f1b0ecf637a2" exitCode=143
Feb 18 19:38:11 crc kubenswrapper[4942]: I0218 19:38:11.157847 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerName="nova-metadata-log" containerID="cri-o://754248603e713494d1ff408069c74a57b870cc3dc9fca6bf7971c23184806daf" gracePeriod=30
Feb 18 19:38:11 crc kubenswrapper[4942]: I0218 19:38:11.157936 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dfa84b55-e3b4-425c-983b-57e60b06ee59","Type":"ContainerDied","Data":"57854175ad36d4613dd7ba3f9c987cf448463e0159084dfab670f1b0ecf637a2"}
Feb 18 19:38:11 crc kubenswrapper[4942]: I0218 19:38:11.158329 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerName="nova-metadata-metadata" containerID="cri-o://2b016dd053ee1c6b8b02284b80f61da51907ed4b62870908ede29de5ad95f8a6" gracePeriod=30
Feb 18 19:38:12 crc kubenswrapper[4942]: I0218 19:38:12.169166 4942 generic.go:334] "Generic (PLEG): container finished" podID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerID="754248603e713494d1ff408069c74a57b870cc3dc9fca6bf7971c23184806daf" exitCode=143
Feb 18 19:38:12 crc kubenswrapper[4942]: I0218 19:38:12.169220 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f2c79fe-40ed-4218-9db5-ecf2750cd43c","Type":"ContainerDied","Data":"754248603e713494d1ff408069c74a57b870cc3dc9fca6bf7971c23184806daf"}
Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.193405 4942 generic.go:334] "Generic (PLEG): container finished" podID="934ec68d-b7d2-4435-8e54-4984cea15920" containerID="f8f16eaf99b27e5378de6b9f610d1eff9cec3f93c1ffd82c5027dc6d962fe712" exitCode=0
Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.193450 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"934ec68d-b7d2-4435-8e54-4984cea15920","Type":"ContainerDied","Data":"f8f16eaf99b27e5378de6b9f610d1eff9cec3f93c1ffd82c5027dc6d962fe712"}
Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.301254 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": read tcp 10.217.0.2:40718->10.217.0.217:8775: read: connection reset by peer"
Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.301314 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": read tcp 10.217.0.2:40716->10.217.0.217:8775: read: connection reset by peer"
Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.459646 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.607072 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934ec68d-b7d2-4435-8e54-4984cea15920-config-data\") pod \"934ec68d-b7d2-4435-8e54-4984cea15920\" (UID: \"934ec68d-b7d2-4435-8e54-4984cea15920\") "
Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.607172 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934ec68d-b7d2-4435-8e54-4984cea15920-combined-ca-bundle\") pod \"934ec68d-b7d2-4435-8e54-4984cea15920\" (UID: \"934ec68d-b7d2-4435-8e54-4984cea15920\") "
Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.608142 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb4b2\" (UniqueName: \"kubernetes.io/projected/934ec68d-b7d2-4435-8e54-4984cea15920-kube-api-access-nb4b2\") pod \"934ec68d-b7d2-4435-8e54-4984cea15920\" (UID: \"934ec68d-b7d2-4435-8e54-4984cea15920\") "
Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.617156 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/934ec68d-b7d2-4435-8e54-4984cea15920-kube-api-access-nb4b2" (OuterVolumeSpecName: "kube-api-access-nb4b2") pod "934ec68d-b7d2-4435-8e54-4984cea15920" (UID: "934ec68d-b7d2-4435-8e54-4984cea15920"). InnerVolumeSpecName "kube-api-access-nb4b2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.639091 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/934ec68d-b7d2-4435-8e54-4984cea15920-config-data" (OuterVolumeSpecName: "config-data") pod "934ec68d-b7d2-4435-8e54-4984cea15920" (UID: "934ec68d-b7d2-4435-8e54-4984cea15920"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.649675 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/934ec68d-b7d2-4435-8e54-4984cea15920-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "934ec68d-b7d2-4435-8e54-4984cea15920" (UID: "934ec68d-b7d2-4435-8e54-4984cea15920"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.710165 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934ec68d-b7d2-4435-8e54-4984cea15920-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.710324 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934ec68d-b7d2-4435-8e54-4984cea15920-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.710501 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb4b2\" (UniqueName: \"kubernetes.io/projected/934ec68d-b7d2-4435-8e54-4984cea15920-kube-api-access-nb4b2\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.762011 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.912976 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-nova-metadata-tls-certs\") pod \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") "
Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.913092 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-config-data\") pod \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") "
Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.913151 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-logs\") pod \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") "
Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.913260 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-combined-ca-bundle\") pod \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") "
Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.913320 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89kwj\" (UniqueName: \"kubernetes.io/projected/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-kube-api-access-89kwj\") pod \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\" (UID: \"8f2c79fe-40ed-4218-9db5-ecf2750cd43c\") "
Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.917075 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-logs" (OuterVolumeSpecName: "logs") pod "8f2c79fe-40ed-4218-9db5-ecf2750cd43c" (UID: "8f2c79fe-40ed-4218-9db5-ecf2750cd43c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.924049 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-kube-api-access-89kwj" (OuterVolumeSpecName: "kube-api-access-89kwj") pod "8f2c79fe-40ed-4218-9db5-ecf2750cd43c" (UID: "8f2c79fe-40ed-4218-9db5-ecf2750cd43c"). InnerVolumeSpecName "kube-api-access-89kwj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.945185 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-config-data" (OuterVolumeSpecName: "config-data") pod "8f2c79fe-40ed-4218-9db5-ecf2750cd43c" (UID: "8f2c79fe-40ed-4218-9db5-ecf2750cd43c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.959881 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f2c79fe-40ed-4218-9db5-ecf2750cd43c" (UID: "8f2c79fe-40ed-4218-9db5-ecf2750cd43c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:38:14 crc kubenswrapper[4942]: I0218 19:38:14.970792 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8f2c79fe-40ed-4218-9db5-ecf2750cd43c" (UID: "8f2c79fe-40ed-4218-9db5-ecf2750cd43c"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.015276 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-logs\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.015313 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.015324 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89kwj\" (UniqueName: \"kubernetes.io/projected/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-kube-api-access-89kwj\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.015334 4942 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.015342 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2c79fe-40ed-4218-9db5-ecf2750cd43c-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.203397 4942 generic.go:334] "Generic (PLEG): container finished" podID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerID="2b016dd053ee1c6b8b02284b80f61da51907ed4b62870908ede29de5ad95f8a6" exitCode=0
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.203473 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f2c79fe-40ed-4218-9db5-ecf2750cd43c","Type":"ContainerDied","Data":"2b016dd053ee1c6b8b02284b80f61da51907ed4b62870908ede29de5ad95f8a6"}
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.203702 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f2c79fe-40ed-4218-9db5-ecf2750cd43c","Type":"ContainerDied","Data":"3ccb0af91234a49cce52e60bb8c5d83c89a7cbfd38f25c2175232126f6780778"}
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.203720 4942 scope.go:117] "RemoveContainer" containerID="2b016dd053ee1c6b8b02284b80f61da51907ed4b62870908ede29de5ad95f8a6"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.203503 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.207191 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"934ec68d-b7d2-4435-8e54-4984cea15920","Type":"ContainerDied","Data":"c51803e068a4df40cf491f2ad59ffe56be6273114ad918ad454d9a2712bc7592"}
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.207348 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.231093 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.231344 4942 scope.go:117] "RemoveContainer" containerID="754248603e713494d1ff408069c74a57b870cc3dc9fca6bf7971c23184806daf"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.246791 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.252893 4942 scope.go:117] "RemoveContainer" containerID="2b016dd053ee1c6b8b02284b80f61da51907ed4b62870908ede29de5ad95f8a6"
Feb 18 19:38:15 crc kubenswrapper[4942]: E0218 19:38:15.260491 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b016dd053ee1c6b8b02284b80f61da51907ed4b62870908ede29de5ad95f8a6\": container with ID starting with 2b016dd053ee1c6b8b02284b80f61da51907ed4b62870908ede29de5ad95f8a6 not found: ID does not exist" containerID="2b016dd053ee1c6b8b02284b80f61da51907ed4b62870908ede29de5ad95f8a6"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.260549 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b016dd053ee1c6b8b02284b80f61da51907ed4b62870908ede29de5ad95f8a6"} err="failed to get container status \"2b016dd053ee1c6b8b02284b80f61da51907ed4b62870908ede29de5ad95f8a6\": rpc error: code = NotFound desc = could not find container \"2b016dd053ee1c6b8b02284b80f61da51907ed4b62870908ede29de5ad95f8a6\": container with ID starting with 2b016dd053ee1c6b8b02284b80f61da51907ed4b62870908ede29de5ad95f8a6 not found: ID does not exist"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.260579 4942 scope.go:117] "RemoveContainer" containerID="754248603e713494d1ff408069c74a57b870cc3dc9fca6bf7971c23184806daf"
Feb 18 19:38:15 crc kubenswrapper[4942]: E0218 19:38:15.261094 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"754248603e713494d1ff408069c74a57b870cc3dc9fca6bf7971c23184806daf\": container with ID starting with 754248603e713494d1ff408069c74a57b870cc3dc9fca6bf7971c23184806daf not found: ID does not exist" containerID="754248603e713494d1ff408069c74a57b870cc3dc9fca6bf7971c23184806daf"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.261145 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"754248603e713494d1ff408069c74a57b870cc3dc9fca6bf7971c23184806daf"} err="failed to get container status \"754248603e713494d1ff408069c74a57b870cc3dc9fca6bf7971c23184806daf\": rpc error: code = NotFound desc = could not find container \"754248603e713494d1ff408069c74a57b870cc3dc9fca6bf7971c23184806daf\": container with ID starting with 754248603e713494d1ff408069c74a57b870cc3dc9fca6bf7971c23184806daf not found: ID does not exist"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.261177 4942 scope.go:117] "RemoveContainer" containerID="f8f16eaf99b27e5378de6b9f610d1eff9cec3f93c1ffd82c5027dc6d962fe712"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.267804 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.278596 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.293536 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 19:38:15 crc kubenswrapper[4942]: E0218 19:38:15.294015 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" containerName="init"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.294035 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" containerName="init"
Feb 18 19:38:15 crc kubenswrapper[4942]: E0218 19:38:15.294048 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c972a02-9d35-43d1-9ef6-ab99f7cded50" containerName="nova-manage"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.294054 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c972a02-9d35-43d1-9ef6-ab99f7cded50" containerName="nova-manage"
Feb 18 19:38:15 crc kubenswrapper[4942]: E0218 19:38:15.294075 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" containerName="dnsmasq-dns"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.294082 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" containerName="dnsmasq-dns"
Feb 18 19:38:15 crc kubenswrapper[4942]: E0218 19:38:15.294092 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerName="nova-metadata-metadata"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.294099 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerName="nova-metadata-metadata"
Feb 18 19:38:15 crc kubenswrapper[4942]: E0218 19:38:15.294114 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934ec68d-b7d2-4435-8e54-4984cea15920" containerName="nova-scheduler-scheduler"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.294121 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="934ec68d-b7d2-4435-8e54-4984cea15920" containerName="nova-scheduler-scheduler"
Feb 18 19:38:15 crc kubenswrapper[4942]: E0218 19:38:15.294134 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerName="nova-metadata-log"
Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.294140 4942 state_mem.go:107] "Deleted CPUSet assignment"
podUID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerName="nova-metadata-log" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.294310 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerName="nova-metadata-metadata" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.294326 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" containerName="nova-metadata-log" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.294334 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="df5e2192-70b4-43cc-a9e0-f9023ba0d4a9" containerName="dnsmasq-dns" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.294342 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c972a02-9d35-43d1-9ef6-ab99f7cded50" containerName="nova-manage" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.294355 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="934ec68d-b7d2-4435-8e54-4984cea15920" containerName="nova-scheduler-scheduler" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.295630 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.299444 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.300893 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.300939 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.301109 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.303416 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.320881 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.333422 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.437147 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c0f540-3718-4d09-b50f-78677151be71-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"59c0f540-3718-4d09-b50f-78677151be71\") " pod="openstack/nova-scheduler-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.437202 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2526c15-03de-4c11-83b4-0bc7689a6b23-logs\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.437276 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dv2s\" (UniqueName: \"kubernetes.io/projected/59c0f540-3718-4d09-b50f-78677151be71-kube-api-access-4dv2s\") pod \"nova-scheduler-0\" (UID: \"59c0f540-3718-4d09-b50f-78677151be71\") " 
pod="openstack/nova-scheduler-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.437346 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2526c15-03de-4c11-83b4-0bc7689a6b23-config-data\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.437400 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59c0f540-3718-4d09-b50f-78677151be71-config-data\") pod \"nova-scheduler-0\" (UID: \"59c0f540-3718-4d09-b50f-78677151be71\") " pod="openstack/nova-scheduler-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.437466 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2526c15-03de-4c11-83b4-0bc7689a6b23-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.437561 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2gk4\" (UniqueName: \"kubernetes.io/projected/c2526c15-03de-4c11-83b4-0bc7689a6b23-kube-api-access-f2gk4\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.437612 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2526c15-03de-4c11-83b4-0bc7689a6b23-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0" Feb 18 19:38:15 crc 
kubenswrapper[4942]: I0218 19:38:15.539524 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2526c15-03de-4c11-83b4-0bc7689a6b23-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.539638 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2gk4\" (UniqueName: \"kubernetes.io/projected/c2526c15-03de-4c11-83b4-0bc7689a6b23-kube-api-access-f2gk4\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.539684 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2526c15-03de-4c11-83b4-0bc7689a6b23-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.539735 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c0f540-3718-4d09-b50f-78677151be71-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"59c0f540-3718-4d09-b50f-78677151be71\") " pod="openstack/nova-scheduler-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.539791 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2526c15-03de-4c11-83b4-0bc7689a6b23-logs\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.539844 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dv2s\" (UniqueName: 
\"kubernetes.io/projected/59c0f540-3718-4d09-b50f-78677151be71-kube-api-access-4dv2s\") pod \"nova-scheduler-0\" (UID: \"59c0f540-3718-4d09-b50f-78677151be71\") " pod="openstack/nova-scheduler-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.539893 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2526c15-03de-4c11-83b4-0bc7689a6b23-config-data\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.539921 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59c0f540-3718-4d09-b50f-78677151be71-config-data\") pod \"nova-scheduler-0\" (UID: \"59c0f540-3718-4d09-b50f-78677151be71\") " pod="openstack/nova-scheduler-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.540471 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2526c15-03de-4c11-83b4-0bc7689a6b23-logs\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.544144 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2526c15-03de-4c11-83b4-0bc7689a6b23-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.544639 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c0f540-3718-4d09-b50f-78677151be71-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"59c0f540-3718-4d09-b50f-78677151be71\") " pod="openstack/nova-scheduler-0" Feb 18 19:38:15 crc 
kubenswrapper[4942]: I0218 19:38:15.545092 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59c0f540-3718-4d09-b50f-78677151be71-config-data\") pod \"nova-scheduler-0\" (UID: \"59c0f540-3718-4d09-b50f-78677151be71\") " pod="openstack/nova-scheduler-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.547115 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2526c15-03de-4c11-83b4-0bc7689a6b23-config-data\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.552531 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2526c15-03de-4c11-83b4-0bc7689a6b23-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.557271 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2gk4\" (UniqueName: \"kubernetes.io/projected/c2526c15-03de-4c11-83b4-0bc7689a6b23-kube-api-access-f2gk4\") pod \"nova-metadata-0\" (UID: \"c2526c15-03de-4c11-83b4-0bc7689a6b23\") " pod="openstack/nova-metadata-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.559095 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dv2s\" (UniqueName: \"kubernetes.io/projected/59c0f540-3718-4d09-b50f-78677151be71-kube-api-access-4dv2s\") pod \"nova-scheduler-0\" (UID: \"59c0f540-3718-4d09-b50f-78677151be71\") " pod="openstack/nova-scheduler-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.628850 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 19:38:15 crc kubenswrapper[4942]: I0218 19:38:15.638330 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.067973 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.150149 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfa84b55-e3b4-425c-983b-57e60b06ee59-logs\") pod \"dfa84b55-e3b4-425c-983b-57e60b06ee59\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.150208 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-internal-tls-certs\") pod \"dfa84b55-e3b4-425c-983b-57e60b06ee59\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.150252 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-public-tls-certs\") pod \"dfa84b55-e3b4-425c-983b-57e60b06ee59\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.150311 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-config-data\") pod \"dfa84b55-e3b4-425c-983b-57e60b06ee59\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.150351 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-combined-ca-bundle\") pod \"dfa84b55-e3b4-425c-983b-57e60b06ee59\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.150383 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdxwb\" (UniqueName: \"kubernetes.io/projected/dfa84b55-e3b4-425c-983b-57e60b06ee59-kube-api-access-gdxwb\") pod \"dfa84b55-e3b4-425c-983b-57e60b06ee59\" (UID: \"dfa84b55-e3b4-425c-983b-57e60b06ee59\") " Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.150832 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfa84b55-e3b4-425c-983b-57e60b06ee59-logs" (OuterVolumeSpecName: "logs") pod "dfa84b55-e3b4-425c-983b-57e60b06ee59" (UID: "dfa84b55-e3b4-425c-983b-57e60b06ee59"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.151040 4942 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfa84b55-e3b4-425c-983b-57e60b06ee59-logs\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.155926 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfa84b55-e3b4-425c-983b-57e60b06ee59-kube-api-access-gdxwb" (OuterVolumeSpecName: "kube-api-access-gdxwb") pod "dfa84b55-e3b4-425c-983b-57e60b06ee59" (UID: "dfa84b55-e3b4-425c-983b-57e60b06ee59"). InnerVolumeSpecName "kube-api-access-gdxwb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.199152 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfa84b55-e3b4-425c-983b-57e60b06ee59" (UID: "dfa84b55-e3b4-425c-983b-57e60b06ee59"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.214536 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.223819 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dfa84b55-e3b4-425c-983b-57e60b06ee59" (UID: "dfa84b55-e3b4-425c-983b-57e60b06ee59"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.230375 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.233063 4942 generic.go:334] "Generic (PLEG): container finished" podID="dfa84b55-e3b4-425c-983b-57e60b06ee59" containerID="601eab2ba5b673f055b02438a80da68f6ec4ed45d0a9b9a92cb586749d250eeb" exitCode=0 Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.233112 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dfa84b55-e3b4-425c-983b-57e60b06ee59","Type":"ContainerDied","Data":"601eab2ba5b673f055b02438a80da68f6ec4ed45d0a9b9a92cb586749d250eeb"} Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.233150 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dfa84b55-e3b4-425c-983b-57e60b06ee59","Type":"ContainerDied","Data":"7455a83c698c3d6e0c19dcaa1a1f353e541c5f547cd7e8fdbe3ffdd928daf970"} Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.233171 4942 scope.go:117] "RemoveContainer" containerID="601eab2ba5b673f055b02438a80da68f6ec4ed45d0a9b9a92cb586749d250eeb" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.233173 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.233327 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dfa84b55-e3b4-425c-983b-57e60b06ee59" (UID: "dfa84b55-e3b4-425c-983b-57e60b06ee59"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.239415 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-config-data" (OuterVolumeSpecName: "config-data") pod "dfa84b55-e3b4-425c-983b-57e60b06ee59" (UID: "dfa84b55-e3b4-425c-983b-57e60b06ee59"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.253536 4942 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.253577 4942 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.253590 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.253601 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfa84b55-e3b4-425c-983b-57e60b06ee59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.253614 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdxwb\" (UniqueName: \"kubernetes.io/projected/dfa84b55-e3b4-425c-983b-57e60b06ee59-kube-api-access-gdxwb\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.370195 4942 scope.go:117] "RemoveContainer" 
containerID="57854175ad36d4613dd7ba3f9c987cf448463e0159084dfab670f1b0ecf637a2" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.405926 4942 scope.go:117] "RemoveContainer" containerID="601eab2ba5b673f055b02438a80da68f6ec4ed45d0a9b9a92cb586749d250eeb" Feb 18 19:38:16 crc kubenswrapper[4942]: E0218 19:38:16.406353 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"601eab2ba5b673f055b02438a80da68f6ec4ed45d0a9b9a92cb586749d250eeb\": container with ID starting with 601eab2ba5b673f055b02438a80da68f6ec4ed45d0a9b9a92cb586749d250eeb not found: ID does not exist" containerID="601eab2ba5b673f055b02438a80da68f6ec4ed45d0a9b9a92cb586749d250eeb" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.406404 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"601eab2ba5b673f055b02438a80da68f6ec4ed45d0a9b9a92cb586749d250eeb"} err="failed to get container status \"601eab2ba5b673f055b02438a80da68f6ec4ed45d0a9b9a92cb586749d250eeb\": rpc error: code = NotFound desc = could not find container \"601eab2ba5b673f055b02438a80da68f6ec4ed45d0a9b9a92cb586749d250eeb\": container with ID starting with 601eab2ba5b673f055b02438a80da68f6ec4ed45d0a9b9a92cb586749d250eeb not found: ID does not exist" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.406432 4942 scope.go:117] "RemoveContainer" containerID="57854175ad36d4613dd7ba3f9c987cf448463e0159084dfab670f1b0ecf637a2" Feb 18 19:38:16 crc kubenswrapper[4942]: E0218 19:38:16.406695 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57854175ad36d4613dd7ba3f9c987cf448463e0159084dfab670f1b0ecf637a2\": container with ID starting with 57854175ad36d4613dd7ba3f9c987cf448463e0159084dfab670f1b0ecf637a2 not found: ID does not exist" containerID="57854175ad36d4613dd7ba3f9c987cf448463e0159084dfab670f1b0ecf637a2" Feb 18 19:38:16 crc 
kubenswrapper[4942]: I0218 19:38:16.406722 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57854175ad36d4613dd7ba3f9c987cf448463e0159084dfab670f1b0ecf637a2"} err="failed to get container status \"57854175ad36d4613dd7ba3f9c987cf448463e0159084dfab670f1b0ecf637a2\": rpc error: code = NotFound desc = could not find container \"57854175ad36d4613dd7ba3f9c987cf448463e0159084dfab670f1b0ecf637a2\": container with ID starting with 57854175ad36d4613dd7ba3f9c987cf448463e0159084dfab670f1b0ecf637a2 not found: ID does not exist" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.570030 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.586321 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.597880 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 19:38:16 crc kubenswrapper[4942]: E0218 19:38:16.598423 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa84b55-e3b4-425c-983b-57e60b06ee59" containerName="nova-api-log" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.598448 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa84b55-e3b4-425c-983b-57e60b06ee59" containerName="nova-api-log" Feb 18 19:38:16 crc kubenswrapper[4942]: E0218 19:38:16.598486 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa84b55-e3b4-425c-983b-57e60b06ee59" containerName="nova-api-api" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.598496 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa84b55-e3b4-425c-983b-57e60b06ee59" containerName="nova-api-api" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.598733 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa84b55-e3b4-425c-983b-57e60b06ee59" containerName="nova-api-api" Feb 18 
19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.598782 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa84b55-e3b4-425c-983b-57e60b06ee59" containerName="nova-api-log" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.600112 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.602642 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.603184 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.603346 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.614553 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.766020 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/360fd9f7-8dca-4006-a1c3-24f346ff360e-public-tls-certs\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.766237 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/360fd9f7-8dca-4006-a1c3-24f346ff360e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.766397 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/360fd9f7-8dca-4006-a1c3-24f346ff360e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.766456 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k48v4\" (UniqueName: \"kubernetes.io/projected/360fd9f7-8dca-4006-a1c3-24f346ff360e-kube-api-access-k48v4\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.766624 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/360fd9f7-8dca-4006-a1c3-24f346ff360e-config-data\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.767062 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/360fd9f7-8dca-4006-a1c3-24f346ff360e-logs\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.869811 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/360fd9f7-8dca-4006-a1c3-24f346ff360e-public-tls-certs\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.870001 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/360fd9f7-8dca-4006-a1c3-24f346ff360e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0" Feb 18 
19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.870173 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360fd9f7-8dca-4006-a1c3-24f346ff360e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.870263 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k48v4\" (UniqueName: \"kubernetes.io/projected/360fd9f7-8dca-4006-a1c3-24f346ff360e-kube-api-access-k48v4\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.871264 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/360fd9f7-8dca-4006-a1c3-24f346ff360e-config-data\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.872546 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/360fd9f7-8dca-4006-a1c3-24f346ff360e-logs\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.873287 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/360fd9f7-8dca-4006-a1c3-24f346ff360e-logs\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.874413 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/360fd9f7-8dca-4006-a1c3-24f346ff360e-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.878371 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360fd9f7-8dca-4006-a1c3-24f346ff360e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.878839 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/360fd9f7-8dca-4006-a1c3-24f346ff360e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.888466 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/360fd9f7-8dca-4006-a1c3-24f346ff360e-config-data\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.900494 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k48v4\" (UniqueName: \"kubernetes.io/projected/360fd9f7-8dca-4006-a1c3-24f346ff360e-kube-api-access-k48v4\") pod \"nova-api-0\" (UID: \"360fd9f7-8dca-4006-a1c3-24f346ff360e\") " pod="openstack/nova-api-0" Feb 18 19:38:16 crc kubenswrapper[4942]: I0218 19:38:16.924914 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 19:38:17 crc kubenswrapper[4942]: I0218 19:38:17.054281 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f2c79fe-40ed-4218-9db5-ecf2750cd43c" path="/var/lib/kubelet/pods/8f2c79fe-40ed-4218-9db5-ecf2750cd43c/volumes" Feb 18 19:38:17 crc kubenswrapper[4942]: I0218 19:38:17.055667 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="934ec68d-b7d2-4435-8e54-4984cea15920" path="/var/lib/kubelet/pods/934ec68d-b7d2-4435-8e54-4984cea15920/volumes" Feb 18 19:38:17 crc kubenswrapper[4942]: I0218 19:38:17.057177 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfa84b55-e3b4-425c-983b-57e60b06ee59" path="/var/lib/kubelet/pods/dfa84b55-e3b4-425c-983b-57e60b06ee59/volumes" Feb 18 19:38:17 crc kubenswrapper[4942]: I0218 19:38:17.251138 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"59c0f540-3718-4d09-b50f-78677151be71","Type":"ContainerStarted","Data":"6d4cdb9843e3617ce9be9e454605c696cad35ab9d30b6e9edfef9a5d83b07bca"} Feb 18 19:38:17 crc kubenswrapper[4942]: I0218 19:38:17.251456 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"59c0f540-3718-4d09-b50f-78677151be71","Type":"ContainerStarted","Data":"4dd103e04ad3923a12dea9c35490243ed3d6c8414a5f5f99ff9134b6859377d8"} Feb 18 19:38:17 crc kubenswrapper[4942]: I0218 19:38:17.259125 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c2526c15-03de-4c11-83b4-0bc7689a6b23","Type":"ContainerStarted","Data":"61ee539a751442adb567c24374e8edace8fa52e49d3551635c89816653b3f49c"} Feb 18 19:38:17 crc kubenswrapper[4942]: I0218 19:38:17.259176 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"c2526c15-03de-4c11-83b4-0bc7689a6b23","Type":"ContainerStarted","Data":"010d3227279137e002f4529203fe4c20c0c08675107e2d39375e6c08913b2504"} Feb 18 19:38:17 crc kubenswrapper[4942]: I0218 19:38:17.259194 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c2526c15-03de-4c11-83b4-0bc7689a6b23","Type":"ContainerStarted","Data":"588b81688a9bc83b0d8097e51e95ace4170b8c4a39d7b111dcf6ff133effe808"} Feb 18 19:38:17 crc kubenswrapper[4942]: I0218 19:38:17.281749 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.2817266480000002 podStartE2EDuration="2.281726648s" podCreationTimestamp="2026-02-18 19:38:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:17.277126647 +0000 UTC m=+1256.982059352" watchObservedRunningTime="2026-02-18 19:38:17.281726648 +0000 UTC m=+1256.986659323" Feb 18 19:38:17 crc kubenswrapper[4942]: I0218 19:38:17.302274 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.302257037 podStartE2EDuration="2.302257037s" podCreationTimestamp="2026-02-18 19:38:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:17.29592455 +0000 UTC m=+1257.000857245" watchObservedRunningTime="2026-02-18 19:38:17.302257037 +0000 UTC m=+1257.007189702" Feb 18 19:38:17 crc kubenswrapper[4942]: W0218 19:38:17.359732 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod360fd9f7_8dca_4006_a1c3_24f346ff360e.slice/crio-5c9dbd8559f984496e50528f94e0f78c9e071b45a58e2bbdcb39d25e73d24dec WatchSource:0}: Error finding container 5c9dbd8559f984496e50528f94e0f78c9e071b45a58e2bbdcb39d25e73d24dec: Status 
404 returned error can't find the container with id 5c9dbd8559f984496e50528f94e0f78c9e071b45a58e2bbdcb39d25e73d24dec Feb 18 19:38:17 crc kubenswrapper[4942]: I0218 19:38:17.361538 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 19:38:18 crc kubenswrapper[4942]: I0218 19:38:18.272108 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"360fd9f7-8dca-4006-a1c3-24f346ff360e","Type":"ContainerStarted","Data":"308899099f016f5b8c01d40fcf1e7ef2a91302b3cf849c954e3d065ef43b744e"} Feb 18 19:38:18 crc kubenswrapper[4942]: I0218 19:38:18.272487 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"360fd9f7-8dca-4006-a1c3-24f346ff360e","Type":"ContainerStarted","Data":"81fafda8d71c1fd08cee9c90f4298976a4dafac63ba06e7349f115fa0233d3c2"} Feb 18 19:38:18 crc kubenswrapper[4942]: I0218 19:38:18.272513 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"360fd9f7-8dca-4006-a1c3-24f346ff360e","Type":"ContainerStarted","Data":"5c9dbd8559f984496e50528f94e0f78c9e071b45a58e2bbdcb39d25e73d24dec"} Feb 18 19:38:18 crc kubenswrapper[4942]: I0218 19:38:18.305487 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.305466508 podStartE2EDuration="2.305466508s" podCreationTimestamp="2026-02-18 19:38:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:38:18.29905787 +0000 UTC m=+1258.003990585" watchObservedRunningTime="2026-02-18 19:38:18.305466508 +0000 UTC m=+1258.010399173" Feb 18 19:38:20 crc kubenswrapper[4942]: I0218 19:38:20.629416 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 19:38:20 crc kubenswrapper[4942]: I0218 19:38:20.629660 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-metadata-0" Feb 18 19:38:20 crc kubenswrapper[4942]: I0218 19:38:20.638902 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 19:38:23 crc kubenswrapper[4942]: I0218 19:38:23.741409 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:38:23 crc kubenswrapper[4942]: I0218 19:38:23.742347 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:38:23 crc kubenswrapper[4942]: I0218 19:38:23.742426 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:38:23 crc kubenswrapper[4942]: I0218 19:38:23.743396 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ecda90ff377eb2cb3234b37ad9a8ec87fa575a7e7c5a3a78ee7c2e00f4a7b66"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:38:23 crc kubenswrapper[4942]: I0218 19:38:23.743502 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://8ecda90ff377eb2cb3234b37ad9a8ec87fa575a7e7c5a3a78ee7c2e00f4a7b66" gracePeriod=600 Feb 18 
19:38:24 crc kubenswrapper[4942]: I0218 19:38:24.345553 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="8ecda90ff377eb2cb3234b37ad9a8ec87fa575a7e7c5a3a78ee7c2e00f4a7b66" exitCode=0 Feb 18 19:38:24 crc kubenswrapper[4942]: I0218 19:38:24.345628 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"8ecda90ff377eb2cb3234b37ad9a8ec87fa575a7e7c5a3a78ee7c2e00f4a7b66"} Feb 18 19:38:24 crc kubenswrapper[4942]: I0218 19:38:24.345997 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"0f7c7ce7194dc50e8e7ff903a9631c5d1d6654771462dbd4df2dfa299f3641bf"} Feb 18 19:38:24 crc kubenswrapper[4942]: I0218 19:38:24.346025 4942 scope.go:117] "RemoveContainer" containerID="4ad75b87330a71997979db298f42e179882b61890e654d3a0c077cf25d5cb90b" Feb 18 19:38:25 crc kubenswrapper[4942]: I0218 19:38:25.630130 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 19:38:25 crc kubenswrapper[4942]: I0218 19:38:25.632212 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 19:38:25 crc kubenswrapper[4942]: I0218 19:38:25.639484 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 19:38:25 crc kubenswrapper[4942]: I0218 19:38:25.692182 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 19:38:26 crc kubenswrapper[4942]: I0218 19:38:26.409949 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 19:38:26 crc kubenswrapper[4942]: 
I0218 19:38:26.620713 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 19:38:26 crc kubenswrapper[4942]: I0218 19:38:26.660629 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c2526c15-03de-4c11-83b4-0bc7689a6b23" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.222:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 19:38:26 crc kubenswrapper[4942]: I0218 19:38:26.661050 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c2526c15-03de-4c11-83b4-0bc7689a6b23" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.222:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 19:38:26 crc kubenswrapper[4942]: I0218 19:38:26.925434 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:38:26 crc kubenswrapper[4942]: I0218 19:38:26.925526 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 19:38:27 crc kubenswrapper[4942]: I0218 19:38:27.938104 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="360fd9f7-8dca-4006-a1c3-24f346ff360e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 19:38:27 crc kubenswrapper[4942]: I0218 19:38:27.938435 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="360fd9f7-8dca-4006-a1c3-24f346ff360e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 19:38:35 crc kubenswrapper[4942]: I0218 19:38:35.639492 
4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 19:38:35 crc kubenswrapper[4942]: I0218 19:38:35.648748 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 19:38:35 crc kubenswrapper[4942]: I0218 19:38:35.652657 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 19:38:36 crc kubenswrapper[4942]: I0218 19:38:36.482962 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 19:38:36 crc kubenswrapper[4942]: I0218 19:38:36.933938 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 19:38:36 crc kubenswrapper[4942]: I0218 19:38:36.935378 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 19:38:36 crc kubenswrapper[4942]: I0218 19:38:36.938873 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 19:38:36 crc kubenswrapper[4942]: I0218 19:38:36.942664 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 19:38:37 crc kubenswrapper[4942]: I0218 19:38:37.492990 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 19:38:37 crc kubenswrapper[4942]: I0218 19:38:37.506072 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 19:38:45 crc kubenswrapper[4942]: I0218 19:38:45.229128 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:38:46 crc kubenswrapper[4942]: I0218 19:38:46.227925 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:38:49 crc kubenswrapper[4942]: I0218 19:38:49.281836 4942 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="77de5cb0-e446-407d-9e32-b13f39c84ae2" containerName="rabbitmq" containerID="cri-o://2a06461943313e923de9b2391c5eb34c6a9c08986670b8d6bae063427214e0e7" gracePeriod=604796 Feb 18 19:38:50 crc kubenswrapper[4942]: I0218 19:38:50.856197 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="b6b41292-c562-4964-bb25-d8945415b3da" containerName="rabbitmq" containerID="cri-o://4f752d07e5ee2189bcc31aa4e606e8bcb5f06355b290a2073d2a7609686ffd94" gracePeriod=604796 Feb 18 19:38:55 crc kubenswrapper[4942]: I0218 19:38:55.677887 4942 generic.go:334] "Generic (PLEG): container finished" podID="77de5cb0-e446-407d-9e32-b13f39c84ae2" containerID="2a06461943313e923de9b2391c5eb34c6a9c08986670b8d6bae063427214e0e7" exitCode=0 Feb 18 19:38:55 crc kubenswrapper[4942]: I0218 19:38:55.678242 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"77de5cb0-e446-407d-9e32-b13f39c84ae2","Type":"ContainerDied","Data":"2a06461943313e923de9b2391c5eb34c6a9c08986670b8d6bae063427214e0e7"} Feb 18 19:38:55 crc kubenswrapper[4942]: I0218 19:38:55.965134 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.097784 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-tls\") pod \"77de5cb0-e446-407d-9e32-b13f39c84ae2\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.097825 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/77de5cb0-e446-407d-9e32-b13f39c84ae2-erlang-cookie-secret\") pod \"77de5cb0-e446-407d-9e32-b13f39c84ae2\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.097932 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/77de5cb0-e446-407d-9e32-b13f39c84ae2-pod-info\") pod \"77de5cb0-e446-407d-9e32-b13f39c84ae2\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.097982 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-config-data\") pod \"77de5cb0-e446-407d-9e32-b13f39c84ae2\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.098018 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-confd\") pod \"77de5cb0-e446-407d-9e32-b13f39c84ae2\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.098091 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"77de5cb0-e446-407d-9e32-b13f39c84ae2\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.098115 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-plugins\") pod \"77de5cb0-e446-407d-9e32-b13f39c84ae2\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.098145 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-server-conf\") pod \"77de5cb0-e446-407d-9e32-b13f39c84ae2\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.098166 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-plugins-conf\") pod \"77de5cb0-e446-407d-9e32-b13f39c84ae2\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.098194 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-erlang-cookie\") pod \"77de5cb0-e446-407d-9e32-b13f39c84ae2\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.098217 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wqkf\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-kube-api-access-8wqkf\") pod \"77de5cb0-e446-407d-9e32-b13f39c84ae2\" (UID: \"77de5cb0-e446-407d-9e32-b13f39c84ae2\") " Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 
19:38:56.101038 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "77de5cb0-e446-407d-9e32-b13f39c84ae2" (UID: "77de5cb0-e446-407d-9e32-b13f39c84ae2"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.103405 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "77de5cb0-e446-407d-9e32-b13f39c84ae2" (UID: "77de5cb0-e446-407d-9e32-b13f39c84ae2"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.104452 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "77de5cb0-e446-407d-9e32-b13f39c84ae2" (UID: "77de5cb0-e446-407d-9e32-b13f39c84ae2"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.105192 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77de5cb0-e446-407d-9e32-b13f39c84ae2-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "77de5cb0-e446-407d-9e32-b13f39c84ae2" (UID: "77de5cb0-e446-407d-9e32-b13f39c84ae2"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.109294 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "77de5cb0-e446-407d-9e32-b13f39c84ae2" (UID: "77de5cb0-e446-407d-9e32-b13f39c84ae2"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.111271 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "77de5cb0-e446-407d-9e32-b13f39c84ae2" (UID: "77de5cb0-e446-407d-9e32-b13f39c84ae2"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.113910 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-kube-api-access-8wqkf" (OuterVolumeSpecName: "kube-api-access-8wqkf") pod "77de5cb0-e446-407d-9e32-b13f39c84ae2" (UID: "77de5cb0-e446-407d-9e32-b13f39c84ae2"). InnerVolumeSpecName "kube-api-access-8wqkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.115872 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/77de5cb0-e446-407d-9e32-b13f39c84ae2-pod-info" (OuterVolumeSpecName: "pod-info") pod "77de5cb0-e446-407d-9e32-b13f39c84ae2" (UID: "77de5cb0-e446-407d-9e32-b13f39c84ae2"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.132123 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-config-data" (OuterVolumeSpecName: "config-data") pod "77de5cb0-e446-407d-9e32-b13f39c84ae2" (UID: "77de5cb0-e446-407d-9e32-b13f39c84ae2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.187740 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-server-conf" (OuterVolumeSpecName: "server-conf") pod "77de5cb0-e446-407d-9e32-b13f39c84ae2" (UID: "77de5cb0-e446-407d-9e32-b13f39c84ae2"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.210594 4942 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/77de5cb0-e446-407d-9e32-b13f39c84ae2-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.210625 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.210645 4942 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.210658 4942 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc 
kubenswrapper[4942]: I0218 19:38:56.210669 4942 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.210677 4942 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/77de5cb0-e446-407d-9e32-b13f39c84ae2-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.210686 4942 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.210695 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wqkf\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-kube-api-access-8wqkf\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.210703 4942 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.210712 4942 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/77de5cb0-e446-407d-9e32-b13f39c84ae2-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.241988 4942 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.261658 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "77de5cb0-e446-407d-9e32-b13f39c84ae2" (UID: "77de5cb0-e446-407d-9e32-b13f39c84ae2"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.312975 4942 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/77de5cb0-e446-407d-9e32-b13f39c84ae2-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.313208 4942 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.688772 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"77de5cb0-e446-407d-9e32-b13f39c84ae2","Type":"ContainerDied","Data":"f25769d8510cd516ae5401d18772436aec7e570a6454b6b2469618103a8155cf"} Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.688827 4942 scope.go:117] "RemoveContainer" containerID="2a06461943313e923de9b2391c5eb34c6a9c08986670b8d6bae063427214e0e7" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.688965 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.720629 4942 scope.go:117] "RemoveContainer" containerID="e242de7f4af5755759f500d3c9dbc2395ec18d3bfe3fe38cf008cae5b5314de3" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.728859 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.737938 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.766441 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:38:56 crc kubenswrapper[4942]: E0218 19:38:56.766935 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77de5cb0-e446-407d-9e32-b13f39c84ae2" containerName="rabbitmq" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.766954 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="77de5cb0-e446-407d-9e32-b13f39c84ae2" containerName="rabbitmq" Feb 18 19:38:56 crc kubenswrapper[4942]: E0218 19:38:56.766978 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77de5cb0-e446-407d-9e32-b13f39c84ae2" containerName="setup-container" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.766986 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="77de5cb0-e446-407d-9e32-b13f39c84ae2" containerName="setup-container" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.767223 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="77de5cb0-e446-407d-9e32-b13f39c84ae2" containerName="rabbitmq" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.768496 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.770518 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.770799 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.771052 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.771164 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.771220 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.772675 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-jnzzx" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.772717 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.796349 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.841220 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/42559616-368c-4628-8d82-75bfc94dcbaf-config-data\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.841296 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/42559616-368c-4628-8d82-75bfc94dcbaf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.841352 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/42559616-368c-4628-8d82-75bfc94dcbaf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.841555 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/42559616-368c-4628-8d82-75bfc94dcbaf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.841661 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/42559616-368c-4628-8d82-75bfc94dcbaf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.841872 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/42559616-368c-4628-8d82-75bfc94dcbaf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.842180 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65dcm\" (UniqueName: 
\"kubernetes.io/projected/42559616-368c-4628-8d82-75bfc94dcbaf-kube-api-access-65dcm\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.842277 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.842327 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/42559616-368c-4628-8d82-75bfc94dcbaf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.842630 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/42559616-368c-4628-8d82-75bfc94dcbaf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.843738 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/42559616-368c-4628-8d82-75bfc94dcbaf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.945786 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/42559616-368c-4628-8d82-75bfc94dcbaf-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.946081 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65dcm\" (UniqueName: \"kubernetes.io/projected/42559616-368c-4628-8d82-75bfc94dcbaf-kube-api-access-65dcm\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.946112 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.946134 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/42559616-368c-4628-8d82-75bfc94dcbaf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.946209 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/42559616-368c-4628-8d82-75bfc94dcbaf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.946252 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/42559616-368c-4628-8d82-75bfc94dcbaf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.946274 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/42559616-368c-4628-8d82-75bfc94dcbaf-config-data\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.946295 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/42559616-368c-4628-8d82-75bfc94dcbaf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.946315 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/42559616-368c-4628-8d82-75bfc94dcbaf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.946338 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/42559616-368c-4628-8d82-75bfc94dcbaf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.946361 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/42559616-368c-4628-8d82-75bfc94dcbaf-server-conf\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.947546 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/42559616-368c-4628-8d82-75bfc94dcbaf-server-conf\") 
pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.948104 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/42559616-368c-4628-8d82-75bfc94dcbaf-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.948590 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/42559616-368c-4628-8d82-75bfc94dcbaf-config-data\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.951152 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/42559616-368c-4628-8d82-75bfc94dcbaf-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.952283 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/42559616-368c-4628-8d82-75bfc94dcbaf-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.952405 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 
19:38:56.968648 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/42559616-368c-4628-8d82-75bfc94dcbaf-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.969409 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/42559616-368c-4628-8d82-75bfc94dcbaf-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.975578 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/42559616-368c-4628-8d82-75bfc94dcbaf-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.980268 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/42559616-368c-4628-8d82-75bfc94dcbaf-pod-info\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:56 crc kubenswrapper[4942]: I0218 19:38:56.997789 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65dcm\" (UniqueName: \"kubernetes.io/projected/42559616-368c-4628-8d82-75bfc94dcbaf-kube-api-access-65dcm\") pod \"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.032000 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod 
\"rabbitmq-server-0\" (UID: \"42559616-368c-4628-8d82-75bfc94dcbaf\") " pod="openstack/rabbitmq-server-0" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.047676 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77de5cb0-e446-407d-9e32-b13f39c84ae2" path="/var/lib/kubelet/pods/77de5cb0-e446-407d-9e32-b13f39c84ae2/volumes" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.161629 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.430415 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.566118 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-plugins-conf\") pod \"b6b41292-c562-4964-bb25-d8945415b3da\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.566229 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b6b41292-c562-4964-bb25-d8945415b3da-pod-info\") pod \"b6b41292-c562-4964-bb25-d8945415b3da\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.566328 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9vpz\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-kube-api-access-p9vpz\") pod \"b6b41292-c562-4964-bb25-d8945415b3da\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.566404 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/b6b41292-c562-4964-bb25-d8945415b3da-erlang-cookie-secret\") pod \"b6b41292-c562-4964-bb25-d8945415b3da\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.566429 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-config-data\") pod \"b6b41292-c562-4964-bb25-d8945415b3da\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.566452 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-tls\") pod \"b6b41292-c562-4964-bb25-d8945415b3da\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.566478 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"b6b41292-c562-4964-bb25-d8945415b3da\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.566549 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-server-conf\") pod \"b6b41292-c562-4964-bb25-d8945415b3da\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.566621 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-plugins\") pod \"b6b41292-c562-4964-bb25-d8945415b3da\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.566642 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-confd\") pod \"b6b41292-c562-4964-bb25-d8945415b3da\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.566697 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-erlang-cookie\") pod \"b6b41292-c562-4964-bb25-d8945415b3da\" (UID: \"b6b41292-c562-4964-bb25-d8945415b3da\") " Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.566995 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b6b41292-c562-4964-bb25-d8945415b3da" (UID: "b6b41292-c562-4964-bb25-d8945415b3da"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.567712 4942 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.568126 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b6b41292-c562-4964-bb25-d8945415b3da" (UID: "b6b41292-c562-4964-bb25-d8945415b3da"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.574551 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "b6b41292-c562-4964-bb25-d8945415b3da" (UID: "b6b41292-c562-4964-bb25-d8945415b3da"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.574606 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b6b41292-c562-4964-bb25-d8945415b3da-pod-info" (OuterVolumeSpecName: "pod-info") pod "b6b41292-c562-4964-bb25-d8945415b3da" (UID: "b6b41292-c562-4964-bb25-d8945415b3da"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.574954 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b6b41292-c562-4964-bb25-d8945415b3da" (UID: "b6b41292-c562-4964-bb25-d8945415b3da"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.582327 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-kube-api-access-p9vpz" (OuterVolumeSpecName: "kube-api-access-p9vpz") pod "b6b41292-c562-4964-bb25-d8945415b3da" (UID: "b6b41292-c562-4964-bb25-d8945415b3da"). InnerVolumeSpecName "kube-api-access-p9vpz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.588821 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b6b41292-c562-4964-bb25-d8945415b3da" (UID: "b6b41292-c562-4964-bb25-d8945415b3da"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.600027 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6b41292-c562-4964-bb25-d8945415b3da-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b6b41292-c562-4964-bb25-d8945415b3da" (UID: "b6b41292-c562-4964-bb25-d8945415b3da"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.615256 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-config-data" (OuterVolumeSpecName: "config-data") pod "b6b41292-c562-4964-bb25-d8945415b3da" (UID: "b6b41292-c562-4964-bb25-d8945415b3da"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.673210 4942 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b6b41292-c562-4964-bb25-d8945415b3da-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.673241 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9vpz\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-kube-api-access-p9vpz\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.673252 4942 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b6b41292-c562-4964-bb25-d8945415b3da-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.673261 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.673272 4942 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.673292 4942 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.673300 4942 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.673311 4942 
reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.696365 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-server-conf" (OuterVolumeSpecName: "server-conf") pod "b6b41292-c562-4964-bb25-d8945415b3da" (UID: "b6b41292-c562-4964-bb25-d8945415b3da"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.710369 4942 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.736877 4942 generic.go:334] "Generic (PLEG): container finished" podID="b6b41292-c562-4964-bb25-d8945415b3da" containerID="4f752d07e5ee2189bcc31aa4e606e8bcb5f06355b290a2073d2a7609686ffd94" exitCode=0 Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.737163 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b6b41292-c562-4964-bb25-d8945415b3da","Type":"ContainerDied","Data":"4f752d07e5ee2189bcc31aa4e606e8bcb5f06355b290a2073d2a7609686ffd94"} Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.737206 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b6b41292-c562-4964-bb25-d8945415b3da","Type":"ContainerDied","Data":"dbe1e5a24b02c9ef82c5a83259f9ae73faa64933195a6e2349f17abe3b76bba3"} Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.737224 4942 scope.go:117] "RemoveContainer" containerID="4f752d07e5ee2189bcc31aa4e606e8bcb5f06355b290a2073d2a7609686ffd94" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.737333 4942 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.746698 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.775084 4942 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b6b41292-c562-4964-bb25-d8945415b3da-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.775109 4942 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.780036 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b6b41292-c562-4964-bb25-d8945415b3da" (UID: "b6b41292-c562-4964-bb25-d8945415b3da"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.802524 4942 scope.go:117] "RemoveContainer" containerID="c197a7dd3977502f99f2f3aa2cb1b55953ff18362b376d981b554df6b529f782" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.839320 4942 scope.go:117] "RemoveContainer" containerID="4f752d07e5ee2189bcc31aa4e606e8bcb5f06355b290a2073d2a7609686ffd94" Feb 18 19:38:57 crc kubenswrapper[4942]: E0218 19:38:57.839873 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f752d07e5ee2189bcc31aa4e606e8bcb5f06355b290a2073d2a7609686ffd94\": container with ID starting with 4f752d07e5ee2189bcc31aa4e606e8bcb5f06355b290a2073d2a7609686ffd94 not found: ID does not exist" containerID="4f752d07e5ee2189bcc31aa4e606e8bcb5f06355b290a2073d2a7609686ffd94" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.839923 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f752d07e5ee2189bcc31aa4e606e8bcb5f06355b290a2073d2a7609686ffd94"} err="failed to get container status \"4f752d07e5ee2189bcc31aa4e606e8bcb5f06355b290a2073d2a7609686ffd94\": rpc error: code = NotFound desc = could not find container \"4f752d07e5ee2189bcc31aa4e606e8bcb5f06355b290a2073d2a7609686ffd94\": container with ID starting with 4f752d07e5ee2189bcc31aa4e606e8bcb5f06355b290a2073d2a7609686ffd94 not found: ID does not exist" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.839951 4942 scope.go:117] "RemoveContainer" containerID="c197a7dd3977502f99f2f3aa2cb1b55953ff18362b376d981b554df6b529f782" Feb 18 19:38:57 crc kubenswrapper[4942]: E0218 19:38:57.840420 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c197a7dd3977502f99f2f3aa2cb1b55953ff18362b376d981b554df6b529f782\": container with ID starting with 
c197a7dd3977502f99f2f3aa2cb1b55953ff18362b376d981b554df6b529f782 not found: ID does not exist" containerID="c197a7dd3977502f99f2f3aa2cb1b55953ff18362b376d981b554df6b529f782" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.840489 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c197a7dd3977502f99f2f3aa2cb1b55953ff18362b376d981b554df6b529f782"} err="failed to get container status \"c197a7dd3977502f99f2f3aa2cb1b55953ff18362b376d981b554df6b529f782\": rpc error: code = NotFound desc = could not find container \"c197a7dd3977502f99f2f3aa2cb1b55953ff18362b376d981b554df6b529f782\": container with ID starting with c197a7dd3977502f99f2f3aa2cb1b55953ff18362b376d981b554df6b529f782 not found: ID does not exist" Feb 18 19:38:57 crc kubenswrapper[4942]: I0218 19:38:57.878037 4942 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b6b41292-c562-4964-bb25-d8945415b3da-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.077713 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.089269 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.110057 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:38:58 crc kubenswrapper[4942]: E0218 19:38:58.110645 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b41292-c562-4964-bb25-d8945415b3da" containerName="setup-container" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.110677 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b41292-c562-4964-bb25-d8945415b3da" containerName="setup-container" Feb 18 19:38:58 crc kubenswrapper[4942]: E0218 19:38:58.110712 4942 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b6b41292-c562-4964-bb25-d8945415b3da" containerName="rabbitmq" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.110723 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b41292-c562-4964-bb25-d8945415b3da" containerName="rabbitmq" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.111428 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6b41292-c562-4964-bb25-d8945415b3da" containerName="rabbitmq" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.114025 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.116384 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.116863 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.117185 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.117377 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.117462 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.119449 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-wp8g5" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.121869 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.130149 4942 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"rabbitmq-cell1-server-conf" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.184050 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc294c27-1cd0-4930-8f8d-efe5d0127708-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.184276 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc294c27-1cd0-4930-8f8d-efe5d0127708-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.184423 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc294c27-1cd0-4930-8f8d-efe5d0127708-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.184512 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc294c27-1cd0-4930-8f8d-efe5d0127708-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.184605 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc294c27-1cd0-4930-8f8d-efe5d0127708-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.184707 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.184988 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jgrg\" (UniqueName: \"kubernetes.io/projected/cc294c27-1cd0-4930-8f8d-efe5d0127708-kube-api-access-9jgrg\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.185148 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc294c27-1cd0-4930-8f8d-efe5d0127708-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.185273 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc294c27-1cd0-4930-8f8d-efe5d0127708-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.185407 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc294c27-1cd0-4930-8f8d-efe5d0127708-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.185526 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc294c27-1cd0-4930-8f8d-efe5d0127708-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.287106 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.287159 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jgrg\" (UniqueName: \"kubernetes.io/projected/cc294c27-1cd0-4930-8f8d-efe5d0127708-kube-api-access-9jgrg\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.287204 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc294c27-1cd0-4930-8f8d-efe5d0127708-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.287246 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc294c27-1cd0-4930-8f8d-efe5d0127708-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc 
kubenswrapper[4942]: I0218 19:38:58.287300 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc294c27-1cd0-4930-8f8d-efe5d0127708-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.287322 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc294c27-1cd0-4930-8f8d-efe5d0127708-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.287380 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc294c27-1cd0-4930-8f8d-efe5d0127708-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.287451 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc294c27-1cd0-4930-8f8d-efe5d0127708-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.287498 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc294c27-1cd0-4930-8f8d-efe5d0127708-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.287499 4942 operation_generator.go:580] "MountVolume.MountDevice 
succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.287522 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc294c27-1cd0-4930-8f8d-efe5d0127708-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.287545 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc294c27-1cd0-4930-8f8d-efe5d0127708-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.288352 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc294c27-1cd0-4930-8f8d-efe5d0127708-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.288373 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc294c27-1cd0-4930-8f8d-efe5d0127708-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.288677 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/cc294c27-1cd0-4930-8f8d-efe5d0127708-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.288991 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc294c27-1cd0-4930-8f8d-efe5d0127708-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.289166 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc294c27-1cd0-4930-8f8d-efe5d0127708-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.292187 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc294c27-1cd0-4930-8f8d-efe5d0127708-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.296828 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc294c27-1cd0-4930-8f8d-efe5d0127708-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.296847 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc294c27-1cd0-4930-8f8d-efe5d0127708-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.297198 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc294c27-1cd0-4930-8f8d-efe5d0127708-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.304546 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jgrg\" (UniqueName: \"kubernetes.io/projected/cc294c27-1cd0-4930-8f8d-efe5d0127708-kube-api-access-9jgrg\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.325597 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc294c27-1cd0-4930-8f8d-efe5d0127708\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.438329 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.756888 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"42559616-368c-4628-8d82-75bfc94dcbaf","Type":"ContainerStarted","Data":"d03fa74286e0ba9f49dc010e97e62f872ca0134167eef531b9ebdfccd9d98ca4"} Feb 18 19:38:58 crc kubenswrapper[4942]: I0218 19:38:58.928819 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 19:38:59 crc kubenswrapper[4942]: I0218 19:38:59.048017 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6b41292-c562-4964-bb25-d8945415b3da" path="/var/lib/kubelet/pods/b6b41292-c562-4964-bb25-d8945415b3da/volumes" Feb 18 19:38:59 crc kubenswrapper[4942]: I0218 19:38:59.798866 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc294c27-1cd0-4930-8f8d-efe5d0127708","Type":"ContainerStarted","Data":"77a10bf3a2a906137c236a38ae42f6dba5df2b51344c9966142ff8d18c6e0d91"} Feb 18 19:38:59 crc kubenswrapper[4942]: I0218 19:38:59.811550 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"42559616-368c-4628-8d82-75bfc94dcbaf","Type":"ContainerStarted","Data":"b1fb5a34758585bcfa34bf0a4375593df263351ddd74d3023184593f9a1853a7"} Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.564696 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-gx5qf"] Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.567101 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.570708 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.584935 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-gx5qf"] Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.747474 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.747552 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-config\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.747605 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.747637 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.747656 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.747680 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.747948 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52hb8\" (UniqueName: \"kubernetes.io/projected/97a40a08-ea01-4347-90ed-4b250d289c34-kube-api-access-52hb8\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.823450 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc294c27-1cd0-4930-8f8d-efe5d0127708","Type":"ContainerStarted","Data":"c70066480e0fa5885f66e21f6662d6bce6b0511882161ee08aeff8aefdb90858"} Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.850419 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 
19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.850487 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-config\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.850559 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.850598 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.850623 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.850650 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.850699 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52hb8\" (UniqueName: \"kubernetes.io/projected/97a40a08-ea01-4347-90ed-4b250d289c34-kube-api-access-52hb8\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.851435 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.851529 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.851642 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-config\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.851642 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.852218 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.852262 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.879176 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52hb8\" (UniqueName: \"kubernetes.io/projected/97a40a08-ea01-4347-90ed-4b250d289c34-kube-api-access-52hb8\") pod \"dnsmasq-dns-79bd4cc8c9-gx5qf\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:00 crc kubenswrapper[4942]: I0218 19:39:00.891404 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:01 crc kubenswrapper[4942]: I0218 19:39:01.373116 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-gx5qf"] Feb 18 19:39:01 crc kubenswrapper[4942]: W0218 19:39:01.375036 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97a40a08_ea01_4347_90ed_4b250d289c34.slice/crio-59099e034c4d122adcd7ea90d69e8e41e804845a9dc0889d53db0e99ee740e0d WatchSource:0}: Error finding container 59099e034c4d122adcd7ea90d69e8e41e804845a9dc0889d53db0e99ee740e0d: Status 404 returned error can't find the container with id 59099e034c4d122adcd7ea90d69e8e41e804845a9dc0889d53db0e99ee740e0d Feb 18 19:39:01 crc kubenswrapper[4942]: I0218 19:39:01.833381 4942 generic.go:334] "Generic (PLEG): container finished" podID="97a40a08-ea01-4347-90ed-4b250d289c34" containerID="b2da2cadf4d9d0ef1f345f8cd9fb7b2bb034daca3e383be305fa0578f8000df3" exitCode=0 Feb 18 19:39:01 crc kubenswrapper[4942]: I0218 19:39:01.833457 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" event={"ID":"97a40a08-ea01-4347-90ed-4b250d289c34","Type":"ContainerDied","Data":"b2da2cadf4d9d0ef1f345f8cd9fb7b2bb034daca3e383be305fa0578f8000df3"} Feb 18 19:39:01 crc kubenswrapper[4942]: I0218 19:39:01.833505 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" event={"ID":"97a40a08-ea01-4347-90ed-4b250d289c34","Type":"ContainerStarted","Data":"59099e034c4d122adcd7ea90d69e8e41e804845a9dc0889d53db0e99ee740e0d"} Feb 18 19:39:02 crc kubenswrapper[4942]: I0218 19:39:02.327063 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="b6b41292-c562-4964-bb25-d8945415b3da" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: i/o timeout" Feb 18 19:39:02 crc 
kubenswrapper[4942]: I0218 19:39:02.846045 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" event={"ID":"97a40a08-ea01-4347-90ed-4b250d289c34","Type":"ContainerStarted","Data":"548b21cd31fc7101d993f909165aaa180afcc87c620cb5fd0e5079acfc3bdef3"} Feb 18 19:39:02 crc kubenswrapper[4942]: I0218 19:39:02.846191 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:02 crc kubenswrapper[4942]: I0218 19:39:02.884780 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" podStartSLOduration=2.884750548 podStartE2EDuration="2.884750548s" podCreationTimestamp="2026-02-18 19:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:39:02.878683225 +0000 UTC m=+1302.583615890" watchObservedRunningTime="2026-02-18 19:39:02.884750548 +0000 UTC m=+1302.589683213" Feb 18 19:39:10 crc kubenswrapper[4942]: I0218 19:39:10.894520 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.000821 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-7jhpx"] Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.001176 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" podUID="7097c36f-c705-4a21-be80-ea057d24ace8" containerName="dnsmasq-dns" containerID="cri-o://59bdba50db92d7f040d8a79e5e6b99a3471a426e80e12a58995334733d255e36" gracePeriod=10 Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.450656 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cd9bffc9-7bbf4"] Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.453296 4942 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.474490 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cd9bffc9-7bbf4"] Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.631678 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-dns-swift-storage-0\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.631785 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-ovsdbserver-nb\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.632448 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-openstack-edpm-ipam\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.632527 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-dns-svc\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.632683 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndv7z\" (UniqueName: \"kubernetes.io/projected/f492d190-cab0-4416-b2c4-46d4485d89e0-kube-api-access-ndv7z\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.632838 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-config\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.632888 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-ovsdbserver-sb\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.734833 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-dns-swift-storage-0\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.734967 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-ovsdbserver-nb\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.735036 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-openstack-edpm-ipam\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.735147 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-dns-svc\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.735217 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndv7z\" (UniqueName: \"kubernetes.io/projected/f492d190-cab0-4416-b2c4-46d4485d89e0-kube-api-access-ndv7z\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.735278 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-config\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.735302 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-ovsdbserver-sb\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.736701 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-ovsdbserver-sb\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.736830 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-config\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.736972 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-dns-svc\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.737037 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-openstack-edpm-ipam\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.737229 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-dns-swift-storage-0\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.737302 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f492d190-cab0-4416-b2c4-46d4485d89e0-ovsdbserver-nb\") pod 
\"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.769224 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndv7z\" (UniqueName: \"kubernetes.io/projected/f492d190-cab0-4416-b2c4-46d4485d89e0-kube-api-access-ndv7z\") pod \"dnsmasq-dns-6cd9bffc9-7bbf4\" (UID: \"f492d190-cab0-4416-b2c4-46d4485d89e0\") " pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:11 crc kubenswrapper[4942]: I0218 19:39:11.781335 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.021918 4942 generic.go:334] "Generic (PLEG): container finished" podID="7097c36f-c705-4a21-be80-ea057d24ace8" containerID="59bdba50db92d7f040d8a79e5e6b99a3471a426e80e12a58995334733d255e36" exitCode=0 Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.022216 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" event={"ID":"7097c36f-c705-4a21-be80-ea057d24ace8","Type":"ContainerDied","Data":"59bdba50db92d7f040d8a79e5e6b99a3471a426e80e12a58995334733d255e36"} Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.117740 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.267948 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-dns-svc\") pod \"7097c36f-c705-4a21-be80-ea057d24ace8\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.267989 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-dns-swift-storage-0\") pod \"7097c36f-c705-4a21-be80-ea057d24ace8\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.268124 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-config\") pod \"7097c36f-c705-4a21-be80-ea057d24ace8\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.268216 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkld8\" (UniqueName: \"kubernetes.io/projected/7097c36f-c705-4a21-be80-ea057d24ace8-kube-api-access-nkld8\") pod \"7097c36f-c705-4a21-be80-ea057d24ace8\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.268233 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-ovsdbserver-sb\") pod \"7097c36f-c705-4a21-be80-ea057d24ace8\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.268294 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-ovsdbserver-nb\") pod \"7097c36f-c705-4a21-be80-ea057d24ace8\" (UID: \"7097c36f-c705-4a21-be80-ea057d24ace8\") " Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.293732 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7097c36f-c705-4a21-be80-ea057d24ace8-kube-api-access-nkld8" (OuterVolumeSpecName: "kube-api-access-nkld8") pod "7097c36f-c705-4a21-be80-ea057d24ace8" (UID: "7097c36f-c705-4a21-be80-ea057d24ace8"). InnerVolumeSpecName "kube-api-access-nkld8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.332622 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7097c36f-c705-4a21-be80-ea057d24ace8" (UID: "7097c36f-c705-4a21-be80-ea057d24ace8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.346792 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7097c36f-c705-4a21-be80-ea057d24ace8" (UID: "7097c36f-c705-4a21-be80-ea057d24ace8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.348233 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7097c36f-c705-4a21-be80-ea057d24ace8" (UID: "7097c36f-c705-4a21-be80-ea057d24ace8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.353698 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7097c36f-c705-4a21-be80-ea057d24ace8" (UID: "7097c36f-c705-4a21-be80-ea057d24ace8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.354880 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-config" (OuterVolumeSpecName: "config") pod "7097c36f-c705-4a21-be80-ea057d24ace8" (UID: "7097c36f-c705-4a21-be80-ea057d24ace8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.370419 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.370458 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkld8\" (UniqueName: \"kubernetes.io/projected/7097c36f-c705-4a21-be80-ea057d24ace8-kube-api-access-nkld8\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.370468 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.370476 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 
19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.370486 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.370495 4942 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7097c36f-c705-4a21-be80-ea057d24ace8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:12 crc kubenswrapper[4942]: I0218 19:39:12.454920 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cd9bffc9-7bbf4"] Feb 18 19:39:13 crc kubenswrapper[4942]: I0218 19:39:13.034643 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" Feb 18 19:39:13 crc kubenswrapper[4942]: I0218 19:39:13.034634 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-7jhpx" event={"ID":"7097c36f-c705-4a21-be80-ea057d24ace8","Type":"ContainerDied","Data":"5c67289996fab91f1e19ef4b863aed3cd05ec958251ed161ac176da9f1432384"} Feb 18 19:39:13 crc kubenswrapper[4942]: I0218 19:39:13.035232 4942 scope.go:117] "RemoveContainer" containerID="59bdba50db92d7f040d8a79e5e6b99a3471a426e80e12a58995334733d255e36" Feb 18 19:39:13 crc kubenswrapper[4942]: I0218 19:39:13.036915 4942 generic.go:334] "Generic (PLEG): container finished" podID="f492d190-cab0-4416-b2c4-46d4485d89e0" containerID="18c444ebc7d363aec8c613d09feb7cdf24dd60cc24c0375c4813030aa1ffc1f5" exitCode=0 Feb 18 19:39:13 crc kubenswrapper[4942]: I0218 19:39:13.051668 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" event={"ID":"f492d190-cab0-4416-b2c4-46d4485d89e0","Type":"ContainerDied","Data":"18c444ebc7d363aec8c613d09feb7cdf24dd60cc24c0375c4813030aa1ffc1f5"} Feb 18 19:39:13 crc kubenswrapper[4942]: I0218 19:39:13.051704 
4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" event={"ID":"f492d190-cab0-4416-b2c4-46d4485d89e0","Type":"ContainerStarted","Data":"02af4d14c52016aa2562402100c28c6c3c0217a21b01e377e0cc017044202514"} Feb 18 19:39:13 crc kubenswrapper[4942]: I0218 19:39:13.061397 4942 scope.go:117] "RemoveContainer" containerID="81cc4bd58d4674e6299bf3f92627b59ac247ba15bf8a7017013a911bae4a12c5" Feb 18 19:39:13 crc kubenswrapper[4942]: I0218 19:39:13.101872 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-7jhpx"] Feb 18 19:39:13 crc kubenswrapper[4942]: I0218 19:39:13.132436 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-7jhpx"] Feb 18 19:39:14 crc kubenswrapper[4942]: I0218 19:39:14.048419 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" event={"ID":"f492d190-cab0-4416-b2c4-46d4485d89e0","Type":"ContainerStarted","Data":"4bae647aa7b7d2b4de867c1c258980055290ece36f7258a7e17a12434075b909"} Feb 18 19:39:14 crc kubenswrapper[4942]: I0218 19:39:14.049751 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:14 crc kubenswrapper[4942]: I0218 19:39:14.073368 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" podStartSLOduration=3.073341459 podStartE2EDuration="3.073341459s" podCreationTimestamp="2026-02-18 19:39:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:39:14.064262615 +0000 UTC m=+1313.769195280" watchObservedRunningTime="2026-02-18 19:39:14.073341459 +0000 UTC m=+1313.778274124" Feb 18 19:39:15 crc kubenswrapper[4942]: I0218 19:39:15.052872 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7097c36f-c705-4a21-be80-ea057d24ace8" 
path="/var/lib/kubelet/pods/7097c36f-c705-4a21-be80-ea057d24ace8/volumes" Feb 18 19:39:21 crc kubenswrapper[4942]: I0218 19:39:21.784093 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cd9bffc9-7bbf4" Feb 18 19:39:21 crc kubenswrapper[4942]: I0218 19:39:21.860551 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-gx5qf"] Feb 18 19:39:21 crc kubenswrapper[4942]: I0218 19:39:21.860859 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" podUID="97a40a08-ea01-4347-90ed-4b250d289c34" containerName="dnsmasq-dns" containerID="cri-o://548b21cd31fc7101d993f909165aaa180afcc87c620cb5fd0e5079acfc3bdef3" gracePeriod=10 Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.137049 4942 generic.go:334] "Generic (PLEG): container finished" podID="97a40a08-ea01-4347-90ed-4b250d289c34" containerID="548b21cd31fc7101d993f909165aaa180afcc87c620cb5fd0e5079acfc3bdef3" exitCode=0 Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.137159 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" event={"ID":"97a40a08-ea01-4347-90ed-4b250d289c34","Type":"ContainerDied","Data":"548b21cd31fc7101d993f909165aaa180afcc87c620cb5fd0e5079acfc3bdef3"} Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.375183 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.476657 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-ovsdbserver-nb\") pod \"97a40a08-ea01-4347-90ed-4b250d289c34\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.476944 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-openstack-edpm-ipam\") pod \"97a40a08-ea01-4347-90ed-4b250d289c34\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.477031 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-ovsdbserver-sb\") pod \"97a40a08-ea01-4347-90ed-4b250d289c34\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.477172 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-dns-swift-storage-0\") pod \"97a40a08-ea01-4347-90ed-4b250d289c34\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.477204 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52hb8\" (UniqueName: \"kubernetes.io/projected/97a40a08-ea01-4347-90ed-4b250d289c34-kube-api-access-52hb8\") pod \"97a40a08-ea01-4347-90ed-4b250d289c34\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.477259 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-config\") pod \"97a40a08-ea01-4347-90ed-4b250d289c34\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.477291 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-dns-svc\") pod \"97a40a08-ea01-4347-90ed-4b250d289c34\" (UID: \"97a40a08-ea01-4347-90ed-4b250d289c34\") " Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.495955 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97a40a08-ea01-4347-90ed-4b250d289c34-kube-api-access-52hb8" (OuterVolumeSpecName: "kube-api-access-52hb8") pod "97a40a08-ea01-4347-90ed-4b250d289c34" (UID: "97a40a08-ea01-4347-90ed-4b250d289c34"). InnerVolumeSpecName "kube-api-access-52hb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.533167 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "97a40a08-ea01-4347-90ed-4b250d289c34" (UID: "97a40a08-ea01-4347-90ed-4b250d289c34"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.549279 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "97a40a08-ea01-4347-90ed-4b250d289c34" (UID: "97a40a08-ea01-4347-90ed-4b250d289c34"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.555406 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "97a40a08-ea01-4347-90ed-4b250d289c34" (UID: "97a40a08-ea01-4347-90ed-4b250d289c34"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.559479 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-config" (OuterVolumeSpecName: "config") pod "97a40a08-ea01-4347-90ed-4b250d289c34" (UID: "97a40a08-ea01-4347-90ed-4b250d289c34"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.561237 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "97a40a08-ea01-4347-90ed-4b250d289c34" (UID: "97a40a08-ea01-4347-90ed-4b250d289c34"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.567480 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "97a40a08-ea01-4347-90ed-4b250d289c34" (UID: "97a40a08-ea01-4347-90ed-4b250d289c34"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.579535 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-config\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.579632 4942 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.579692 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.579748 4942 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.579844 4942 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.579896 4942 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/97a40a08-ea01-4347-90ed-4b250d289c34-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:22 crc kubenswrapper[4942]: I0218 19:39:22.579943 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52hb8\" (UniqueName: \"kubernetes.io/projected/97a40a08-ea01-4347-90ed-4b250d289c34-kube-api-access-52hb8\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:23 crc kubenswrapper[4942]: I0218 19:39:23.148888 
4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" event={"ID":"97a40a08-ea01-4347-90ed-4b250d289c34","Type":"ContainerDied","Data":"59099e034c4d122adcd7ea90d69e8e41e804845a9dc0889d53db0e99ee740e0d"} Feb 18 19:39:23 crc kubenswrapper[4942]: I0218 19:39:23.148944 4942 scope.go:117] "RemoveContainer" containerID="548b21cd31fc7101d993f909165aaa180afcc87c620cb5fd0e5079acfc3bdef3" Feb 18 19:39:23 crc kubenswrapper[4942]: I0218 19:39:23.148977 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-gx5qf" Feb 18 19:39:23 crc kubenswrapper[4942]: I0218 19:39:23.176863 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-gx5qf"] Feb 18 19:39:23 crc kubenswrapper[4942]: I0218 19:39:23.186143 4942 scope.go:117] "RemoveContainer" containerID="b2da2cadf4d9d0ef1f345f8cd9fb7b2bb034daca3e383be305fa0578f8000df3" Feb 18 19:39:23 crc kubenswrapper[4942]: I0218 19:39:23.187448 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-gx5qf"] Feb 18 19:39:25 crc kubenswrapper[4942]: I0218 19:39:25.051154 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97a40a08-ea01-4347-90ed-4b250d289c34" path="/var/lib/kubelet/pods/97a40a08-ea01-4347-90ed-4b250d289c34/volumes" Feb 18 19:39:32 crc kubenswrapper[4942]: I0218 19:39:32.239812 4942 generic.go:334] "Generic (PLEG): container finished" podID="42559616-368c-4628-8d82-75bfc94dcbaf" containerID="b1fb5a34758585bcfa34bf0a4375593df263351ddd74d3023184593f9a1853a7" exitCode=0 Feb 18 19:39:32 crc kubenswrapper[4942]: I0218 19:39:32.239916 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"42559616-368c-4628-8d82-75bfc94dcbaf","Type":"ContainerDied","Data":"b1fb5a34758585bcfa34bf0a4375593df263351ddd74d3023184593f9a1853a7"} Feb 18 19:39:33 crc kubenswrapper[4942]: I0218 19:39:33.249657 4942 
generic.go:334] "Generic (PLEG): container finished" podID="cc294c27-1cd0-4930-8f8d-efe5d0127708" containerID="c70066480e0fa5885f66e21f6662d6bce6b0511882161ee08aeff8aefdb90858" exitCode=0 Feb 18 19:39:33 crc kubenswrapper[4942]: I0218 19:39:33.249757 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc294c27-1cd0-4930-8f8d-efe5d0127708","Type":"ContainerDied","Data":"c70066480e0fa5885f66e21f6662d6bce6b0511882161ee08aeff8aefdb90858"} Feb 18 19:39:33 crc kubenswrapper[4942]: I0218 19:39:33.253200 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"42559616-368c-4628-8d82-75bfc94dcbaf","Type":"ContainerStarted","Data":"02b9627c8348b2f40510e9014b2215f9a89cda25fb40c87099392acac7054104"} Feb 18 19:39:33 crc kubenswrapper[4942]: I0218 19:39:33.253430 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 18 19:39:33 crc kubenswrapper[4942]: I0218 19:39:33.320811 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.320789437 podStartE2EDuration="37.320789437s" podCreationTimestamp="2026-02-18 19:38:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:39:33.314146709 +0000 UTC m=+1333.019079394" watchObservedRunningTime="2026-02-18 19:39:33.320789437 +0000 UTC m=+1333.025722112" Feb 18 19:39:34 crc kubenswrapper[4942]: I0218 19:39:34.264900 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc294c27-1cd0-4930-8f8d-efe5d0127708","Type":"ContainerStarted","Data":"247bd27b9e9a99eb506e4c094a8cd2257ecc949b2b78d1822aebc24397327dcb"} Feb 18 19:39:34 crc kubenswrapper[4942]: I0218 19:39:34.265454 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:39:34 crc kubenswrapper[4942]: I0218 19:39:34.294062 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.294038307 podStartE2EDuration="36.294038307s" podCreationTimestamp="2026-02-18 19:38:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 19:39:34.287974504 +0000 UTC m=+1333.992907249" watchObservedRunningTime="2026-02-18 19:39:34.294038307 +0000 UTC m=+1333.998970972" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.344921 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk"] Feb 18 19:39:35 crc kubenswrapper[4942]: E0218 19:39:35.345688 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7097c36f-c705-4a21-be80-ea057d24ace8" containerName="init" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.345709 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="7097c36f-c705-4a21-be80-ea057d24ace8" containerName="init" Feb 18 19:39:35 crc kubenswrapper[4942]: E0218 19:39:35.345745 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a40a08-ea01-4347-90ed-4b250d289c34" containerName="dnsmasq-dns" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.345753 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a40a08-ea01-4347-90ed-4b250d289c34" containerName="dnsmasq-dns" Feb 18 19:39:35 crc kubenswrapper[4942]: E0218 19:39:35.345787 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7097c36f-c705-4a21-be80-ea057d24ace8" containerName="dnsmasq-dns" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.345795 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="7097c36f-c705-4a21-be80-ea057d24ace8" containerName="dnsmasq-dns" Feb 18 19:39:35 crc kubenswrapper[4942]: E0218 
19:39:35.345817 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a40a08-ea01-4347-90ed-4b250d289c34" containerName="init" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.345824 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a40a08-ea01-4347-90ed-4b250d289c34" containerName="init" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.346058 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="97a40a08-ea01-4347-90ed-4b250d289c34" containerName="dnsmasq-dns" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.346088 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="7097c36f-c705-4a21-be80-ea057d24ace8" containerName="dnsmasq-dns" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.346938 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.348592 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.349048 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.349313 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.349592 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.360872 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk"] Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.442307 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.442405 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.442501 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.442569 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wvhh\" (UniqueName: \"kubernetes.io/projected/d972a9f6-b2f0-46db-a51b-b47575ff72d6-kube-api-access-6wvhh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.544821 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.545183 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.545363 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wvhh\" (UniqueName: \"kubernetes.io/projected/d972a9f6-b2f0-46db-a51b-b47575ff72d6-kube-api-access-6wvhh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.545550 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.551347 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.551387 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.553460 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.564806 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wvhh\" (UniqueName: \"kubernetes.io/projected/d972a9f6-b2f0-46db-a51b-b47575ff72d6-kube-api-access-6wvhh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:35 crc kubenswrapper[4942]: I0218 19:39:35.672092 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:36 crc kubenswrapper[4942]: I0218 19:39:36.358416 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk"] Feb 18 19:39:37 crc kubenswrapper[4942]: I0218 19:39:37.289008 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" event={"ID":"d972a9f6-b2f0-46db-a51b-b47575ff72d6","Type":"ContainerStarted","Data":"45552895c35ce022126d6d4ac53308b70b4fce8a439abac0a760fc1143a57ced"} Feb 18 19:39:47 crc kubenswrapper[4942]: I0218 19:39:47.165052 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 18 19:39:47 crc kubenswrapper[4942]: I0218 19:39:47.405589 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" event={"ID":"d972a9f6-b2f0-46db-a51b-b47575ff72d6","Type":"ContainerStarted","Data":"90a3f88d42ea2500c4fc51fdcef0e53b011217e4249ea0aff3f311beebd6fa7f"} Feb 18 19:39:47 crc kubenswrapper[4942]: I0218 19:39:47.436530 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" podStartSLOduration=2.057121928 podStartE2EDuration="12.436503544s" podCreationTimestamp="2026-02-18 19:39:35 +0000 UTC" firstStartedPulling="2026-02-18 19:39:36.369207912 +0000 UTC m=+1336.074140577" lastFinishedPulling="2026-02-18 19:39:46.748589538 +0000 UTC m=+1346.453522193" observedRunningTime="2026-02-18 19:39:47.429899208 +0000 UTC m=+1347.134831873" watchObservedRunningTime="2026-02-18 19:39:47.436503544 +0000 UTC m=+1347.141436219" Feb 18 19:39:48 crc kubenswrapper[4942]: I0218 19:39:48.441992 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 18 19:39:57 crc kubenswrapper[4942]: 
I0218 19:39:57.503555 4942 generic.go:334] "Generic (PLEG): container finished" podID="d972a9f6-b2f0-46db-a51b-b47575ff72d6" containerID="90a3f88d42ea2500c4fc51fdcef0e53b011217e4249ea0aff3f311beebd6fa7f" exitCode=0 Feb 18 19:39:57 crc kubenswrapper[4942]: I0218 19:39:57.503604 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" event={"ID":"d972a9f6-b2f0-46db-a51b-b47575ff72d6","Type":"ContainerDied","Data":"90a3f88d42ea2500c4fc51fdcef0e53b011217e4249ea0aff3f311beebd6fa7f"} Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.040045 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.162220 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-repo-setup-combined-ca-bundle\") pod \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.162330 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-ssh-key-openstack-edpm-ipam\") pod \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.162460 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-inventory\") pod \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.162689 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-6wvhh\" (UniqueName: \"kubernetes.io/projected/d972a9f6-b2f0-46db-a51b-b47575ff72d6-kube-api-access-6wvhh\") pod \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\" (UID: \"d972a9f6-b2f0-46db-a51b-b47575ff72d6\") " Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.170242 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d972a9f6-b2f0-46db-a51b-b47575ff72d6-kube-api-access-6wvhh" (OuterVolumeSpecName: "kube-api-access-6wvhh") pod "d972a9f6-b2f0-46db-a51b-b47575ff72d6" (UID: "d972a9f6-b2f0-46db-a51b-b47575ff72d6"). InnerVolumeSpecName "kube-api-access-6wvhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.171430 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d972a9f6-b2f0-46db-a51b-b47575ff72d6" (UID: "d972a9f6-b2f0-46db-a51b-b47575ff72d6"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.211423 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d972a9f6-b2f0-46db-a51b-b47575ff72d6" (UID: "d972a9f6-b2f0-46db-a51b-b47575ff72d6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.223026 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-inventory" (OuterVolumeSpecName: "inventory") pod "d972a9f6-b2f0-46db-a51b-b47575ff72d6" (UID: "d972a9f6-b2f0-46db-a51b-b47575ff72d6"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.265124 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wvhh\" (UniqueName: \"kubernetes.io/projected/d972a9f6-b2f0-46db-a51b-b47575ff72d6-kube-api-access-6wvhh\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.265162 4942 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.265180 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.265193 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d972a9f6-b2f0-46db-a51b-b47575ff72d6-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.533429 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" event={"ID":"d972a9f6-b2f0-46db-a51b-b47575ff72d6","Type":"ContainerDied","Data":"45552895c35ce022126d6d4ac53308b70b4fce8a439abac0a760fc1143a57ced"} Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.533500 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45552895c35ce022126d6d4ac53308b70b4fce8a439abac0a760fc1143a57ced" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.533511 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2vggk" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.629225 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd"] Feb 18 19:39:59 crc kubenswrapper[4942]: E0218 19:39:59.630268 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d972a9f6-b2f0-46db-a51b-b47575ff72d6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.630302 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="d972a9f6-b2f0-46db-a51b-b47575ff72d6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.630828 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="d972a9f6-b2f0-46db-a51b-b47575ff72d6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.632048 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.634147 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.634357 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.634750 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.635493 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.640829 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd"] Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.672514 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tzhsd\" (UID: \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.672596 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tzhsd\" (UID: \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.672635 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxq2g\" (UniqueName: \"kubernetes.io/projected/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-kube-api-access-qxq2g\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tzhsd\" (UID: \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.773910 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tzhsd\" (UID: \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.773985 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tzhsd\" (UID: \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.774024 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxq2g\" (UniqueName: \"kubernetes.io/projected/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-kube-api-access-qxq2g\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tzhsd\" (UID: \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.778819 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-tzhsd\" (UID: \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.778964 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tzhsd\" (UID: \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.791012 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxq2g\" (UniqueName: \"kubernetes.io/projected/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-kube-api-access-qxq2g\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-tzhsd\" (UID: \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd" Feb 18 19:39:59 crc kubenswrapper[4942]: I0218 19:39:59.970997 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd" Feb 18 19:40:00 crc kubenswrapper[4942]: I0218 19:40:00.509383 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd"] Feb 18 19:40:00 crc kubenswrapper[4942]: W0218 19:40:00.511207 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a63da15_6b13_4c7b_bf83_2a4685ada3cf.slice/crio-6fbbe6d8ecae7388df25dad65a19d3ca711387dc0e056ba2385f71b9e90ef15c WatchSource:0}: Error finding container 6fbbe6d8ecae7388df25dad65a19d3ca711387dc0e056ba2385f71b9e90ef15c: Status 404 returned error can't find the container with id 6fbbe6d8ecae7388df25dad65a19d3ca711387dc0e056ba2385f71b9e90ef15c Feb 18 19:40:00 crc kubenswrapper[4942]: I0218 19:40:00.544200 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd" event={"ID":"0a63da15-6b13-4c7b-bf83-2a4685ada3cf","Type":"ContainerStarted","Data":"6fbbe6d8ecae7388df25dad65a19d3ca711387dc0e056ba2385f71b9e90ef15c"} Feb 18 19:40:01 crc kubenswrapper[4942]: I0218 19:40:01.555510 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd" event={"ID":"0a63da15-6b13-4c7b-bf83-2a4685ada3cf","Type":"ContainerStarted","Data":"0dd99c4d846e38d3cc34b10490e5b4c7a913dec7ca68bf37c382b2d307c2c33f"} Feb 18 19:40:01 crc kubenswrapper[4942]: I0218 19:40:01.583461 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd" podStartSLOduration=2.161554366 podStartE2EDuration="2.583424707s" podCreationTimestamp="2026-02-18 19:39:59 +0000 UTC" firstStartedPulling="2026-02-18 19:40:00.513095696 +0000 UTC m=+1360.218028361" lastFinishedPulling="2026-02-18 19:40:00.934966037 +0000 UTC m=+1360.639898702" observedRunningTime="2026-02-18 
19:40:01.57680657 +0000 UTC m=+1361.281739235" watchObservedRunningTime="2026-02-18 19:40:01.583424707 +0000 UTC m=+1361.288357412" Feb 18 19:40:04 crc kubenswrapper[4942]: I0218 19:40:04.589086 4942 generic.go:334] "Generic (PLEG): container finished" podID="0a63da15-6b13-4c7b-bf83-2a4685ada3cf" containerID="0dd99c4d846e38d3cc34b10490e5b4c7a913dec7ca68bf37c382b2d307c2c33f" exitCode=0 Feb 18 19:40:04 crc kubenswrapper[4942]: I0218 19:40:04.589169 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd" event={"ID":"0a63da15-6b13-4c7b-bf83-2a4685ada3cf","Type":"ContainerDied","Data":"0dd99c4d846e38d3cc34b10490e5b4c7a913dec7ca68bf37c382b2d307c2c33f"} Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.405565 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd" Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.522419 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-inventory\") pod \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\" (UID: \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\") " Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.522615 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-ssh-key-openstack-edpm-ipam\") pod \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\" (UID: \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\") " Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.522745 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxq2g\" (UniqueName: \"kubernetes.io/projected/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-kube-api-access-qxq2g\") pod \"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\" (UID: 
\"0a63da15-6b13-4c7b-bf83-2a4685ada3cf\") " Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.531581 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-kube-api-access-qxq2g" (OuterVolumeSpecName: "kube-api-access-qxq2g") pod "0a63da15-6b13-4c7b-bf83-2a4685ada3cf" (UID: "0a63da15-6b13-4c7b-bf83-2a4685ada3cf"). InnerVolumeSpecName "kube-api-access-qxq2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.557292 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-inventory" (OuterVolumeSpecName: "inventory") pod "0a63da15-6b13-4c7b-bf83-2a4685ada3cf" (UID: "0a63da15-6b13-4c7b-bf83-2a4685ada3cf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.557869 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0a63da15-6b13-4c7b-bf83-2a4685ada3cf" (UID: "0a63da15-6b13-4c7b-bf83-2a4685ada3cf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.608296 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd" event={"ID":"0a63da15-6b13-4c7b-bf83-2a4685ada3cf","Type":"ContainerDied","Data":"6fbbe6d8ecae7388df25dad65a19d3ca711387dc0e056ba2385f71b9e90ef15c"} Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.608342 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fbbe6d8ecae7388df25dad65a19d3ca711387dc0e056ba2385f71b9e90ef15c" Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.608352 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-tzhsd" Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.625702 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.625929 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxq2g\" (UniqueName: \"kubernetes.io/projected/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-kube-api-access-qxq2g\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.626028 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a63da15-6b13-4c7b-bf83-2a4685ada3cf-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.785620 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk"] Feb 18 19:40:06 crc kubenswrapper[4942]: E0218 19:40:06.786043 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a63da15-6b13-4c7b-bf83-2a4685ada3cf" 
containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.786060 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a63da15-6b13-4c7b-bf83-2a4685ada3cf" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.786234 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a63da15-6b13-4c7b-bf83-2a4685ada3cf" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.786928 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk" Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.788960 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.789479 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.789643 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh" Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.790074 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.803926 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk"] Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.931110 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk\" (UID: 
\"8509499d-4716-44d6-8fb9-539350f38310\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk" Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.931487 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtdhb\" (UniqueName: \"kubernetes.io/projected/8509499d-4716-44d6-8fb9-539350f38310-kube-api-access-jtdhb\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk" Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.931663 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk" Feb 18 19:40:06 crc kubenswrapper[4942]: I0218 19:40:06.931810 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk" Feb 18 19:40:07 crc kubenswrapper[4942]: I0218 19:40:07.033386 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk" Feb 18 19:40:07 crc kubenswrapper[4942]: 
I0218 19:40:07.033860 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtdhb\" (UniqueName: \"kubernetes.io/projected/8509499d-4716-44d6-8fb9-539350f38310-kube-api-access-jtdhb\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk" Feb 18 19:40:07 crc kubenswrapper[4942]: I0218 19:40:07.033962 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk" Feb 18 19:40:07 crc kubenswrapper[4942]: I0218 19:40:07.034072 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk" Feb 18 19:40:07 crc kubenswrapper[4942]: I0218 19:40:07.038208 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk" Feb 18 19:40:07 crc kubenswrapper[4942]: I0218 19:40:07.040595 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk" Feb 18 19:40:07 crc kubenswrapper[4942]: I0218 19:40:07.041787 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk" Feb 18 19:40:07 crc kubenswrapper[4942]: I0218 19:40:07.051352 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtdhb\" (UniqueName: \"kubernetes.io/projected/8509499d-4716-44d6-8fb9-539350f38310-kube-api-access-jtdhb\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk" Feb 18 19:40:07 crc kubenswrapper[4942]: I0218 19:40:07.105378 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk" Feb 18 19:40:07 crc kubenswrapper[4942]: I0218 19:40:07.645973 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk"] Feb 18 19:40:07 crc kubenswrapper[4942]: W0218 19:40:07.647182 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8509499d_4716_44d6_8fb9_539350f38310.slice/crio-fae6eb265c482ab9f0b9dae86441cc0e6b1492dd39f48b694aca54d22f8761f8 WatchSource:0}: Error finding container fae6eb265c482ab9f0b9dae86441cc0e6b1492dd39f48b694aca54d22f8761f8: Status 404 returned error can't find the container with id fae6eb265c482ab9f0b9dae86441cc0e6b1492dd39f48b694aca54d22f8761f8 Feb 18 19:40:08 crc kubenswrapper[4942]: I0218 19:40:08.649826 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk" event={"ID":"8509499d-4716-44d6-8fb9-539350f38310","Type":"ContainerStarted","Data":"2739ca309f28eed92bcb31bf2162f38ddc41005b410bf64c078594e3f919f697"} Feb 18 19:40:08 crc kubenswrapper[4942]: I0218 19:40:08.650171 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk" event={"ID":"8509499d-4716-44d6-8fb9-539350f38310","Type":"ContainerStarted","Data":"fae6eb265c482ab9f0b9dae86441cc0e6b1492dd39f48b694aca54d22f8761f8"} Feb 18 19:40:08 crc kubenswrapper[4942]: I0218 19:40:08.674573 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk" podStartSLOduration=2.268181826 podStartE2EDuration="2.674550992s" podCreationTimestamp="2026-02-18 19:40:06 +0000 UTC" firstStartedPulling="2026-02-18 19:40:07.651696903 +0000 UTC m=+1367.356629568" lastFinishedPulling="2026-02-18 19:40:08.058066069 +0000 UTC m=+1367.762998734" 
observedRunningTime="2026-02-18 19:40:08.666005953 +0000 UTC m=+1368.370938628" watchObservedRunningTime="2026-02-18 19:40:08.674550992 +0000 UTC m=+1368.379483657" Feb 18 19:40:46 crc kubenswrapper[4942]: I0218 19:40:46.752154 4942 scope.go:117] "RemoveContainer" containerID="3ca7995811727ed16b81c6dacf4b796cf8cb865100445c8661ce6034aba901d3" Feb 18 19:40:46 crc kubenswrapper[4942]: I0218 19:40:46.780058 4942 scope.go:117] "RemoveContainer" containerID="91775cfa347502e2c1757de451b7156448b5de2986ec185b6afdfe4b5a592293" Feb 18 19:40:46 crc kubenswrapper[4942]: I0218 19:40:46.864070 4942 scope.go:117] "RemoveContainer" containerID="7d25a210ee23b71ffe8e6422d5c4b01d726dcdfde682e5219625754a6f1f5d53" Feb 18 19:40:46 crc kubenswrapper[4942]: I0218 19:40:46.895894 4942 scope.go:117] "RemoveContainer" containerID="f762c8a9d2890b0c6a5aa76b7b4d8dbd055509fafd584287df55f4c0629feaed" Feb 18 19:40:53 crc kubenswrapper[4942]: I0218 19:40:53.741342 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:40:53 crc kubenswrapper[4942]: I0218 19:40:53.741952 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:41:23 crc kubenswrapper[4942]: I0218 19:41:23.741219 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:41:23 
crc kubenswrapper[4942]: I0218 19:41:23.742058 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:41:47 crc kubenswrapper[4942]: I0218 19:41:47.035216 4942 scope.go:117] "RemoveContainer" containerID="fa114cb799909584016955a551d4df04e20f11df9588933ed8a958c11cc58031" Feb 18 19:41:53 crc kubenswrapper[4942]: I0218 19:41:53.741536 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:41:53 crc kubenswrapper[4942]: I0218 19:41:53.742424 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:41:53 crc kubenswrapper[4942]: I0218 19:41:53.742499 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:41:53 crc kubenswrapper[4942]: I0218 19:41:53.743688 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f7c7ce7194dc50e8e7ff903a9631c5d1d6654771462dbd4df2dfa299f3641bf"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:41:53 crc kubenswrapper[4942]: I0218 
19:41:53.743872 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://0f7c7ce7194dc50e8e7ff903a9631c5d1d6654771462dbd4df2dfa299f3641bf" gracePeriod=600 Feb 18 19:41:54 crc kubenswrapper[4942]: I0218 19:41:54.845674 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="0f7c7ce7194dc50e8e7ff903a9631c5d1d6654771462dbd4df2dfa299f3641bf" exitCode=0 Feb 18 19:41:54 crc kubenswrapper[4942]: I0218 19:41:54.845814 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"0f7c7ce7194dc50e8e7ff903a9631c5d1d6654771462dbd4df2dfa299f3641bf"} Feb 18 19:41:54 crc kubenswrapper[4942]: I0218 19:41:54.846121 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19"} Feb 18 19:41:54 crc kubenswrapper[4942]: I0218 19:41:54.846144 4942 scope.go:117] "RemoveContainer" containerID="8ecda90ff377eb2cb3234b37ad9a8ec87fa575a7e7c5a3a78ee7c2e00f4a7b66" Feb 18 19:41:57 crc kubenswrapper[4942]: I0218 19:41:57.195004 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j4t4b"] Feb 18 19:41:57 crc kubenswrapper[4942]: I0218 19:41:57.200480 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:41:57 crc kubenswrapper[4942]: I0218 19:41:57.211616 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j4t4b"] Feb 18 19:41:57 crc kubenswrapper[4942]: I0218 19:41:57.330392 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7384611-597a-411c-b993-aa5e957d2a22-catalog-content\") pod \"certified-operators-j4t4b\" (UID: \"f7384611-597a-411c-b993-aa5e957d2a22\") " pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:41:57 crc kubenswrapper[4942]: I0218 19:41:57.330607 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7384611-597a-411c-b993-aa5e957d2a22-utilities\") pod \"certified-operators-j4t4b\" (UID: \"f7384611-597a-411c-b993-aa5e957d2a22\") " pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:41:57 crc kubenswrapper[4942]: I0218 19:41:57.330659 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnn62\" (UniqueName: \"kubernetes.io/projected/f7384611-597a-411c-b993-aa5e957d2a22-kube-api-access-vnn62\") pod \"certified-operators-j4t4b\" (UID: \"f7384611-597a-411c-b993-aa5e957d2a22\") " pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:41:57 crc kubenswrapper[4942]: I0218 19:41:57.432564 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7384611-597a-411c-b993-aa5e957d2a22-catalog-content\") pod \"certified-operators-j4t4b\" (UID: \"f7384611-597a-411c-b993-aa5e957d2a22\") " pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:41:57 crc kubenswrapper[4942]: I0218 19:41:57.432679 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7384611-597a-411c-b993-aa5e957d2a22-utilities\") pod \"certified-operators-j4t4b\" (UID: \"f7384611-597a-411c-b993-aa5e957d2a22\") " pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:41:57 crc kubenswrapper[4942]: I0218 19:41:57.432698 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnn62\" (UniqueName: \"kubernetes.io/projected/f7384611-597a-411c-b993-aa5e957d2a22-kube-api-access-vnn62\") pod \"certified-operators-j4t4b\" (UID: \"f7384611-597a-411c-b993-aa5e957d2a22\") " pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:41:57 crc kubenswrapper[4942]: I0218 19:41:57.433109 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7384611-597a-411c-b993-aa5e957d2a22-catalog-content\") pod \"certified-operators-j4t4b\" (UID: \"f7384611-597a-411c-b993-aa5e957d2a22\") " pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:41:57 crc kubenswrapper[4942]: I0218 19:41:57.433126 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7384611-597a-411c-b993-aa5e957d2a22-utilities\") pod \"certified-operators-j4t4b\" (UID: \"f7384611-597a-411c-b993-aa5e957d2a22\") " pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:41:57 crc kubenswrapper[4942]: I0218 19:41:57.454445 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnn62\" (UniqueName: \"kubernetes.io/projected/f7384611-597a-411c-b993-aa5e957d2a22-kube-api-access-vnn62\") pod \"certified-operators-j4t4b\" (UID: \"f7384611-597a-411c-b993-aa5e957d2a22\") " pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:41:57 crc kubenswrapper[4942]: I0218 19:41:57.557578 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:41:58 crc kubenswrapper[4942]: I0218 19:41:58.097496 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j4t4b"] Feb 18 19:41:58 crc kubenswrapper[4942]: I0218 19:41:58.894080 4942 generic.go:334] "Generic (PLEG): container finished" podID="f7384611-597a-411c-b993-aa5e957d2a22" containerID="55ced6eed1f9da2ba0f67e272103f463ac07d013a256f2c716d9a43678f69d7c" exitCode=0 Feb 18 19:41:58 crc kubenswrapper[4942]: I0218 19:41:58.894143 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4t4b" event={"ID":"f7384611-597a-411c-b993-aa5e957d2a22","Type":"ContainerDied","Data":"55ced6eed1f9da2ba0f67e272103f463ac07d013a256f2c716d9a43678f69d7c"} Feb 18 19:41:58 crc kubenswrapper[4942]: I0218 19:41:58.894414 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4t4b" event={"ID":"f7384611-597a-411c-b993-aa5e957d2a22","Type":"ContainerStarted","Data":"6b2ebaf7f10e3d90236eb75e4cc0f96710547637aa67ed7d29f74873591e1d4b"} Feb 18 19:42:00 crc kubenswrapper[4942]: I0218 19:42:00.920453 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4t4b" event={"ID":"f7384611-597a-411c-b993-aa5e957d2a22","Type":"ContainerStarted","Data":"b584ebb67a6c0c28731b826136ac25eca0c9764c757bbe199ab65855e3c1858e"} Feb 18 19:42:01 crc kubenswrapper[4942]: I0218 19:42:01.931196 4942 generic.go:334] "Generic (PLEG): container finished" podID="f7384611-597a-411c-b993-aa5e957d2a22" containerID="b584ebb67a6c0c28731b826136ac25eca0c9764c757bbe199ab65855e3c1858e" exitCode=0 Feb 18 19:42:01 crc kubenswrapper[4942]: I0218 19:42:01.931262 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4t4b" 
event={"ID":"f7384611-597a-411c-b993-aa5e957d2a22","Type":"ContainerDied","Data":"b584ebb67a6c0c28731b826136ac25eca0c9764c757bbe199ab65855e3c1858e"} Feb 18 19:42:01 crc kubenswrapper[4942]: I0218 19:42:01.934101 4942 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:42:02 crc kubenswrapper[4942]: I0218 19:42:02.945705 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4t4b" event={"ID":"f7384611-597a-411c-b993-aa5e957d2a22","Type":"ContainerStarted","Data":"8327d9bc2bdc26af66c6fafe10dbbd520b3f9d0a5ae66043707b2cbdb4de9720"} Feb 18 19:42:02 crc kubenswrapper[4942]: I0218 19:42:02.971820 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j4t4b" podStartSLOduration=2.474420436 podStartE2EDuration="5.971798576s" podCreationTimestamp="2026-02-18 19:41:57 +0000 UTC" firstStartedPulling="2026-02-18 19:41:58.897134799 +0000 UTC m=+1478.602067464" lastFinishedPulling="2026-02-18 19:42:02.394512899 +0000 UTC m=+1482.099445604" observedRunningTime="2026-02-18 19:42:02.964269427 +0000 UTC m=+1482.669202122" watchObservedRunningTime="2026-02-18 19:42:02.971798576 +0000 UTC m=+1482.676731241" Feb 18 19:42:03 crc kubenswrapper[4942]: I0218 19:42:03.982156 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p8sbc"] Feb 18 19:42:03 crc kubenswrapper[4942]: I0218 19:42:03.989631 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:04 crc kubenswrapper[4942]: I0218 19:42:04.000519 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p8sbc"] Feb 18 19:42:04 crc kubenswrapper[4942]: I0218 19:42:04.197233 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m746x\" (UniqueName: \"kubernetes.io/projected/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-kube-api-access-m746x\") pod \"redhat-operators-p8sbc\" (UID: \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\") " pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:04 crc kubenswrapper[4942]: I0218 19:42:04.197490 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-catalog-content\") pod \"redhat-operators-p8sbc\" (UID: \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\") " pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:04 crc kubenswrapper[4942]: I0218 19:42:04.197528 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-utilities\") pod \"redhat-operators-p8sbc\" (UID: \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\") " pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:04 crc kubenswrapper[4942]: I0218 19:42:04.299565 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-catalog-content\") pod \"redhat-operators-p8sbc\" (UID: \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\") " pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:04 crc kubenswrapper[4942]: I0218 19:42:04.299615 4942 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-utilities\") pod \"redhat-operators-p8sbc\" (UID: \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\") " pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:04 crc kubenswrapper[4942]: I0218 19:42:04.299678 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m746x\" (UniqueName: \"kubernetes.io/projected/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-kube-api-access-m746x\") pod \"redhat-operators-p8sbc\" (UID: \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\") " pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:04 crc kubenswrapper[4942]: I0218 19:42:04.300096 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-catalog-content\") pod \"redhat-operators-p8sbc\" (UID: \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\") " pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:04 crc kubenswrapper[4942]: I0218 19:42:04.300161 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-utilities\") pod \"redhat-operators-p8sbc\" (UID: \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\") " pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:04 crc kubenswrapper[4942]: I0218 19:42:04.320579 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m746x\" (UniqueName: \"kubernetes.io/projected/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-kube-api-access-m746x\") pod \"redhat-operators-p8sbc\" (UID: \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\") " pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:04 crc kubenswrapper[4942]: I0218 19:42:04.612123 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:05 crc kubenswrapper[4942]: I0218 19:42:05.103924 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p8sbc"] Feb 18 19:42:05 crc kubenswrapper[4942]: W0218 19:42:05.112906 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd83d3c09_6dc0_4ed4_adc3_183e76a548fa.slice/crio-8daa7defde0b7bd35f941fc3b51d0725ecc9471880f30c486e1dc1a01ca1b651 WatchSource:0}: Error finding container 8daa7defde0b7bd35f941fc3b51d0725ecc9471880f30c486e1dc1a01ca1b651: Status 404 returned error can't find the container with id 8daa7defde0b7bd35f941fc3b51d0725ecc9471880f30c486e1dc1a01ca1b651 Feb 18 19:42:05 crc kubenswrapper[4942]: I0218 19:42:05.975895 4942 generic.go:334] "Generic (PLEG): container finished" podID="d83d3c09-6dc0-4ed4-adc3-183e76a548fa" containerID="068ad5273f8931e5ad2041f71703fdbf4611b94c3edd29d8b659466f08c5dc98" exitCode=0 Feb 18 19:42:05 crc kubenswrapper[4942]: I0218 19:42:05.976072 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p8sbc" event={"ID":"d83d3c09-6dc0-4ed4-adc3-183e76a548fa","Type":"ContainerDied","Data":"068ad5273f8931e5ad2041f71703fdbf4611b94c3edd29d8b659466f08c5dc98"} Feb 18 19:42:05 crc kubenswrapper[4942]: I0218 19:42:05.976192 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p8sbc" event={"ID":"d83d3c09-6dc0-4ed4-adc3-183e76a548fa","Type":"ContainerStarted","Data":"8daa7defde0b7bd35f941fc3b51d0725ecc9471880f30c486e1dc1a01ca1b651"} Feb 18 19:42:07 crc kubenswrapper[4942]: I0218 19:42:07.557691 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:42:07 crc kubenswrapper[4942]: I0218 19:42:07.557991 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:42:07 crc kubenswrapper[4942]: I0218 19:42:07.634395 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:42:07 crc kubenswrapper[4942]: I0218 19:42:07.999832 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p8sbc" event={"ID":"d83d3c09-6dc0-4ed4-adc3-183e76a548fa","Type":"ContainerStarted","Data":"b2550670f59a8a84c5414e0db4e0a63616a11c0c391719faf4d66f8a3c46587f"} Feb 18 19:42:08 crc kubenswrapper[4942]: I0218 19:42:08.073440 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:42:09 crc kubenswrapper[4942]: I0218 19:42:09.370185 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j4t4b"] Feb 18 19:42:10 crc kubenswrapper[4942]: I0218 19:42:10.025571 4942 generic.go:334] "Generic (PLEG): container finished" podID="d83d3c09-6dc0-4ed4-adc3-183e76a548fa" containerID="b2550670f59a8a84c5414e0db4e0a63616a11c0c391719faf4d66f8a3c46587f" exitCode=0 Feb 18 19:42:10 crc kubenswrapper[4942]: I0218 19:42:10.025661 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p8sbc" event={"ID":"d83d3c09-6dc0-4ed4-adc3-183e76a548fa","Type":"ContainerDied","Data":"b2550670f59a8a84c5414e0db4e0a63616a11c0c391719faf4d66f8a3c46587f"} Feb 18 19:42:10 crc kubenswrapper[4942]: I0218 19:42:10.025916 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j4t4b" podUID="f7384611-597a-411c-b993-aa5e957d2a22" containerName="registry-server" containerID="cri-o://8327d9bc2bdc26af66c6fafe10dbbd520b3f9d0a5ae66043707b2cbdb4de9720" gracePeriod=2 Feb 18 19:42:11 crc kubenswrapper[4942]: I0218 19:42:11.053704 4942 generic.go:334] "Generic (PLEG): container 
finished" podID="f7384611-597a-411c-b993-aa5e957d2a22" containerID="8327d9bc2bdc26af66c6fafe10dbbd520b3f9d0a5ae66043707b2cbdb4de9720" exitCode=0 Feb 18 19:42:11 crc kubenswrapper[4942]: I0218 19:42:11.057709 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4t4b" event={"ID":"f7384611-597a-411c-b993-aa5e957d2a22","Type":"ContainerDied","Data":"8327d9bc2bdc26af66c6fafe10dbbd520b3f9d0a5ae66043707b2cbdb4de9720"} Feb 18 19:42:11 crc kubenswrapper[4942]: I0218 19:42:11.397787 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:42:11 crc kubenswrapper[4942]: I0218 19:42:11.474723 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnn62\" (UniqueName: \"kubernetes.io/projected/f7384611-597a-411c-b993-aa5e957d2a22-kube-api-access-vnn62\") pod \"f7384611-597a-411c-b993-aa5e957d2a22\" (UID: \"f7384611-597a-411c-b993-aa5e957d2a22\") " Feb 18 19:42:11 crc kubenswrapper[4942]: I0218 19:42:11.474857 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7384611-597a-411c-b993-aa5e957d2a22-utilities\") pod \"f7384611-597a-411c-b993-aa5e957d2a22\" (UID: \"f7384611-597a-411c-b993-aa5e957d2a22\") " Feb 18 19:42:11 crc kubenswrapper[4942]: I0218 19:42:11.474943 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7384611-597a-411c-b993-aa5e957d2a22-catalog-content\") pod \"f7384611-597a-411c-b993-aa5e957d2a22\" (UID: \"f7384611-597a-411c-b993-aa5e957d2a22\") " Feb 18 19:42:11 crc kubenswrapper[4942]: I0218 19:42:11.476567 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7384611-597a-411c-b993-aa5e957d2a22-utilities" (OuterVolumeSpecName: "utilities") pod 
"f7384611-597a-411c-b993-aa5e957d2a22" (UID: "f7384611-597a-411c-b993-aa5e957d2a22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:42:11 crc kubenswrapper[4942]: I0218 19:42:11.480560 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7384611-597a-411c-b993-aa5e957d2a22-kube-api-access-vnn62" (OuterVolumeSpecName: "kube-api-access-vnn62") pod "f7384611-597a-411c-b993-aa5e957d2a22" (UID: "f7384611-597a-411c-b993-aa5e957d2a22"). InnerVolumeSpecName "kube-api-access-vnn62". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:42:11 crc kubenswrapper[4942]: I0218 19:42:11.535936 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7384611-597a-411c-b993-aa5e957d2a22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7384611-597a-411c-b993-aa5e957d2a22" (UID: "f7384611-597a-411c-b993-aa5e957d2a22"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:42:11 crc kubenswrapper[4942]: I0218 19:42:11.576460 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnn62\" (UniqueName: \"kubernetes.io/projected/f7384611-597a-411c-b993-aa5e957d2a22-kube-api-access-vnn62\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:11 crc kubenswrapper[4942]: I0218 19:42:11.576493 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7384611-597a-411c-b993-aa5e957d2a22-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:11 crc kubenswrapper[4942]: I0218 19:42:11.576503 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7384611-597a-411c-b993-aa5e957d2a22-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:42:12 crc kubenswrapper[4942]: I0218 19:42:12.065970 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p8sbc" event={"ID":"d83d3c09-6dc0-4ed4-adc3-183e76a548fa","Type":"ContainerStarted","Data":"48038e80522a98b03d699cd8afb110847abe51dcb03117153632f7ba133dd1c4"} Feb 18 19:42:12 crc kubenswrapper[4942]: I0218 19:42:12.068308 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4t4b" event={"ID":"f7384611-597a-411c-b993-aa5e957d2a22","Type":"ContainerDied","Data":"6b2ebaf7f10e3d90236eb75e4cc0f96710547637aa67ed7d29f74873591e1d4b"} Feb 18 19:42:12 crc kubenswrapper[4942]: I0218 19:42:12.068345 4942 scope.go:117] "RemoveContainer" containerID="8327d9bc2bdc26af66c6fafe10dbbd520b3f9d0a5ae66043707b2cbdb4de9720" Feb 18 19:42:12 crc kubenswrapper[4942]: I0218 19:42:12.068459 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j4t4b" Feb 18 19:42:12 crc kubenswrapper[4942]: I0218 19:42:12.096906 4942 scope.go:117] "RemoveContainer" containerID="b584ebb67a6c0c28731b826136ac25eca0c9764c757bbe199ab65855e3c1858e" Feb 18 19:42:12 crc kubenswrapper[4942]: I0218 19:42:12.107585 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p8sbc" podStartSLOduration=4.155179566 podStartE2EDuration="9.107564274s" podCreationTimestamp="2026-02-18 19:42:03 +0000 UTC" firstStartedPulling="2026-02-18 19:42:05.978060008 +0000 UTC m=+1485.682992713" lastFinishedPulling="2026-02-18 19:42:10.930444756 +0000 UTC m=+1490.635377421" observedRunningTime="2026-02-18 19:42:12.097218071 +0000 UTC m=+1491.802150786" watchObservedRunningTime="2026-02-18 19:42:12.107564274 +0000 UTC m=+1491.812496939" Feb 18 19:42:12 crc kubenswrapper[4942]: I0218 19:42:12.132701 4942 scope.go:117] "RemoveContainer" containerID="55ced6eed1f9da2ba0f67e272103f463ac07d013a256f2c716d9a43678f69d7c" Feb 18 19:42:12 crc kubenswrapper[4942]: I0218 19:42:12.132941 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j4t4b"] Feb 18 19:42:12 crc kubenswrapper[4942]: I0218 19:42:12.146278 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j4t4b"] Feb 18 19:42:13 crc kubenswrapper[4942]: I0218 19:42:13.048318 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7384611-597a-411c-b993-aa5e957d2a22" path="/var/lib/kubelet/pods/f7384611-597a-411c-b993-aa5e957d2a22/volumes" Feb 18 19:42:14 crc kubenswrapper[4942]: I0218 19:42:14.612714 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:14 crc kubenswrapper[4942]: I0218 19:42:14.612772 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:15 crc kubenswrapper[4942]: I0218 19:42:15.662633 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p8sbc" podUID="d83d3c09-6dc0-4ed4-adc3-183e76a548fa" containerName="registry-server" probeResult="failure" output=< Feb 18 19:42:15 crc kubenswrapper[4942]: timeout: failed to connect service ":50051" within 1s Feb 18 19:42:15 crc kubenswrapper[4942]: > Feb 18 19:42:24 crc kubenswrapper[4942]: I0218 19:42:24.696208 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:24 crc kubenswrapper[4942]: I0218 19:42:24.763988 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:24 crc kubenswrapper[4942]: I0218 19:42:24.939149 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p8sbc"] Feb 18 19:42:26 crc kubenswrapper[4942]: I0218 19:42:26.200626 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p8sbc" podUID="d83d3c09-6dc0-4ed4-adc3-183e76a548fa" containerName="registry-server" containerID="cri-o://48038e80522a98b03d699cd8afb110847abe51dcb03117153632f7ba133dd1c4" gracePeriod=2 Feb 18 19:42:26 crc kubenswrapper[4942]: I0218 19:42:26.685292 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p8sbc" Feb 18 19:42:26 crc kubenswrapper[4942]: I0218 19:42:26.800167 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-catalog-content\") pod \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\" (UID: \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\") " Feb 18 19:42:26 crc kubenswrapper[4942]: I0218 19:42:26.800381 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-utilities\") pod \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\" (UID: \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\") " Feb 18 19:42:26 crc kubenswrapper[4942]: I0218 19:42:26.800535 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m746x\" (UniqueName: \"kubernetes.io/projected/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-kube-api-access-m746x\") pod \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\" (UID: \"d83d3c09-6dc0-4ed4-adc3-183e76a548fa\") " Feb 18 19:42:26 crc kubenswrapper[4942]: I0218 19:42:26.801147 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-utilities" (OuterVolumeSpecName: "utilities") pod "d83d3c09-6dc0-4ed4-adc3-183e76a548fa" (UID: "d83d3c09-6dc0-4ed4-adc3-183e76a548fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:42:26 crc kubenswrapper[4942]: I0218 19:42:26.809067 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-kube-api-access-m746x" (OuterVolumeSpecName: "kube-api-access-m746x") pod "d83d3c09-6dc0-4ed4-adc3-183e76a548fa" (UID: "d83d3c09-6dc0-4ed4-adc3-183e76a548fa"). InnerVolumeSpecName "kube-api-access-m746x". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:42:26 crc kubenswrapper[4942]: I0218 19:42:26.902506 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 19:42:26 crc kubenswrapper[4942]: I0218 19:42:26.902541 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m746x\" (UniqueName: \"kubernetes.io/projected/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-kube-api-access-m746x\") on node \"crc\" DevicePath \"\""
Feb 18 19:42:26 crc kubenswrapper[4942]: I0218 19:42:26.927292 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d83d3c09-6dc0-4ed4-adc3-183e76a548fa" (UID: "d83d3c09-6dc0-4ed4-adc3-183e76a548fa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.005055 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83d3c09-6dc0-4ed4-adc3-183e76a548fa-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.216904 4942 generic.go:334] "Generic (PLEG): container finished" podID="d83d3c09-6dc0-4ed4-adc3-183e76a548fa" containerID="48038e80522a98b03d699cd8afb110847abe51dcb03117153632f7ba133dd1c4" exitCode=0
Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.216980 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p8sbc" event={"ID":"d83d3c09-6dc0-4ed4-adc3-183e76a548fa","Type":"ContainerDied","Data":"48038e80522a98b03d699cd8afb110847abe51dcb03117153632f7ba133dd1c4"}
Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.217077 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p8sbc" event={"ID":"d83d3c09-6dc0-4ed4-adc3-183e76a548fa","Type":"ContainerDied","Data":"8daa7defde0b7bd35f941fc3b51d0725ecc9471880f30c486e1dc1a01ca1b651"}
Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.217110 4942 scope.go:117] "RemoveContainer" containerID="48038e80522a98b03d699cd8afb110847abe51dcb03117153632f7ba133dd1c4"
Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.217010 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p8sbc"
Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.253355 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p8sbc"]
Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.256702 4942 scope.go:117] "RemoveContainer" containerID="b2550670f59a8a84c5414e0db4e0a63616a11c0c391719faf4d66f8a3c46587f"
Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.266105 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p8sbc"]
Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.295139 4942 scope.go:117] "RemoveContainer" containerID="068ad5273f8931e5ad2041f71703fdbf4611b94c3edd29d8b659466f08c5dc98"
Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.339173 4942 scope.go:117] "RemoveContainer" containerID="48038e80522a98b03d699cd8afb110847abe51dcb03117153632f7ba133dd1c4"
Feb 18 19:42:27 crc kubenswrapper[4942]: E0218 19:42:27.339637 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48038e80522a98b03d699cd8afb110847abe51dcb03117153632f7ba133dd1c4\": container with ID starting with 48038e80522a98b03d699cd8afb110847abe51dcb03117153632f7ba133dd1c4 not found: ID does not exist" containerID="48038e80522a98b03d699cd8afb110847abe51dcb03117153632f7ba133dd1c4"
Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.339669 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48038e80522a98b03d699cd8afb110847abe51dcb03117153632f7ba133dd1c4"} err="failed to get container status \"48038e80522a98b03d699cd8afb110847abe51dcb03117153632f7ba133dd1c4\": rpc error: code = NotFound desc = could not find container \"48038e80522a98b03d699cd8afb110847abe51dcb03117153632f7ba133dd1c4\": container with ID starting with 48038e80522a98b03d699cd8afb110847abe51dcb03117153632f7ba133dd1c4 not found: ID does not exist"
Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.339690 4942 scope.go:117] "RemoveContainer" containerID="b2550670f59a8a84c5414e0db4e0a63616a11c0c391719faf4d66f8a3c46587f"
Feb 18 19:42:27 crc kubenswrapper[4942]: E0218 19:42:27.340039 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2550670f59a8a84c5414e0db4e0a63616a11c0c391719faf4d66f8a3c46587f\": container with ID starting with b2550670f59a8a84c5414e0db4e0a63616a11c0c391719faf4d66f8a3c46587f not found: ID does not exist" containerID="b2550670f59a8a84c5414e0db4e0a63616a11c0c391719faf4d66f8a3c46587f"
Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.340066 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2550670f59a8a84c5414e0db4e0a63616a11c0c391719faf4d66f8a3c46587f"} err="failed to get container status \"b2550670f59a8a84c5414e0db4e0a63616a11c0c391719faf4d66f8a3c46587f\": rpc error: code = NotFound desc = could not find container \"b2550670f59a8a84c5414e0db4e0a63616a11c0c391719faf4d66f8a3c46587f\": container with ID starting with b2550670f59a8a84c5414e0db4e0a63616a11c0c391719faf4d66f8a3c46587f not found: ID does not exist"
Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.340084 4942 scope.go:117] "RemoveContainer" containerID="068ad5273f8931e5ad2041f71703fdbf4611b94c3edd29d8b659466f08c5dc98"
Feb 18 19:42:27 crc kubenswrapper[4942]: E0218 19:42:27.340538 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"068ad5273f8931e5ad2041f71703fdbf4611b94c3edd29d8b659466f08c5dc98\": container with ID starting with 068ad5273f8931e5ad2041f71703fdbf4611b94c3edd29d8b659466f08c5dc98 not found: ID does not exist" containerID="068ad5273f8931e5ad2041f71703fdbf4611b94c3edd29d8b659466f08c5dc98"
Feb 18 19:42:27 crc kubenswrapper[4942]: I0218 19:42:27.340607 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"068ad5273f8931e5ad2041f71703fdbf4611b94c3edd29d8b659466f08c5dc98"} err="failed to get container status \"068ad5273f8931e5ad2041f71703fdbf4611b94c3edd29d8b659466f08c5dc98\": rpc error: code = NotFound desc = could not find container \"068ad5273f8931e5ad2041f71703fdbf4611b94c3edd29d8b659466f08c5dc98\": container with ID starting with 068ad5273f8931e5ad2041f71703fdbf4611b94c3edd29d8b659466f08c5dc98 not found: ID does not exist"
Feb 18 19:42:29 crc kubenswrapper[4942]: I0218 19:42:29.057568 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d83d3c09-6dc0-4ed4-adc3-183e76a548fa" path="/var/lib/kubelet/pods/d83d3c09-6dc0-4ed4-adc3-183e76a548fa/volumes"
Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.865783 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rcsvv"]
Feb 18 19:42:44 crc kubenswrapper[4942]: E0218 19:42:44.867171 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83d3c09-6dc0-4ed4-adc3-183e76a548fa" containerName="extract-content"
Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.867193 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83d3c09-6dc0-4ed4-adc3-183e76a548fa" containerName="extract-content"
Feb 18 19:42:44 crc kubenswrapper[4942]: E0218 19:42:44.867214 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7384611-597a-411c-b993-aa5e957d2a22" containerName="extract-utilities"
Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.867225 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7384611-597a-411c-b993-aa5e957d2a22" containerName="extract-utilities"
Feb 18 19:42:44 crc kubenswrapper[4942]: E0218 19:42:44.867250 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83d3c09-6dc0-4ed4-adc3-183e76a548fa" containerName="registry-server"
Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.867261 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83d3c09-6dc0-4ed4-adc3-183e76a548fa" containerName="registry-server"
Feb 18 19:42:44 crc kubenswrapper[4942]: E0218 19:42:44.867276 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83d3c09-6dc0-4ed4-adc3-183e76a548fa" containerName="extract-utilities"
Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.867287 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83d3c09-6dc0-4ed4-adc3-183e76a548fa" containerName="extract-utilities"
Feb 18 19:42:44 crc kubenswrapper[4942]: E0218 19:42:44.867314 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7384611-597a-411c-b993-aa5e957d2a22" containerName="registry-server"
Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.867324 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7384611-597a-411c-b993-aa5e957d2a22" containerName="registry-server"
Feb 18 19:42:44 crc kubenswrapper[4942]: E0218 19:42:44.867348 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7384611-597a-411c-b993-aa5e957d2a22" containerName="extract-content"
Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.867358 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7384611-597a-411c-b993-aa5e957d2a22" containerName="extract-content"
Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.867619 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="d83d3c09-6dc0-4ed4-adc3-183e76a548fa" containerName="registry-server"
Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.867668 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7384611-597a-411c-b993-aa5e957d2a22" containerName="registry-server"
Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.869842 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rcsvv"
Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.877649 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rcsvv"]
Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.991720 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv55p\" (UniqueName: \"kubernetes.io/projected/e8d8c3f0-9548-40e6-8504-24d707672276-kube-api-access-rv55p\") pod \"community-operators-rcsvv\" (UID: \"e8d8c3f0-9548-40e6-8504-24d707672276\") " pod="openshift-marketplace/community-operators-rcsvv"
Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.991997 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d8c3f0-9548-40e6-8504-24d707672276-utilities\") pod \"community-operators-rcsvv\" (UID: \"e8d8c3f0-9548-40e6-8504-24d707672276\") " pod="openshift-marketplace/community-operators-rcsvv"
Feb 18 19:42:44 crc kubenswrapper[4942]: I0218 19:42:44.992138 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d8c3f0-9548-40e6-8504-24d707672276-catalog-content\") pod \"community-operators-rcsvv\" (UID: \"e8d8c3f0-9548-40e6-8504-24d707672276\") " pod="openshift-marketplace/community-operators-rcsvv"
Feb 18 19:42:45 crc kubenswrapper[4942]: I0218 19:42:45.093865 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv55p\" (UniqueName: \"kubernetes.io/projected/e8d8c3f0-9548-40e6-8504-24d707672276-kube-api-access-rv55p\") pod \"community-operators-rcsvv\" (UID: \"e8d8c3f0-9548-40e6-8504-24d707672276\") " pod="openshift-marketplace/community-operators-rcsvv"
Feb 18 19:42:45 crc kubenswrapper[4942]: I0218 19:42:45.094004 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d8c3f0-9548-40e6-8504-24d707672276-utilities\") pod \"community-operators-rcsvv\" (UID: \"e8d8c3f0-9548-40e6-8504-24d707672276\") " pod="openshift-marketplace/community-operators-rcsvv"
Feb 18 19:42:45 crc kubenswrapper[4942]: I0218 19:42:45.094034 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d8c3f0-9548-40e6-8504-24d707672276-catalog-content\") pod \"community-operators-rcsvv\" (UID: \"e8d8c3f0-9548-40e6-8504-24d707672276\") " pod="openshift-marketplace/community-operators-rcsvv"
Feb 18 19:42:45 crc kubenswrapper[4942]: I0218 19:42:45.094483 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d8c3f0-9548-40e6-8504-24d707672276-utilities\") pod \"community-operators-rcsvv\" (UID: \"e8d8c3f0-9548-40e6-8504-24d707672276\") " pod="openshift-marketplace/community-operators-rcsvv"
Feb 18 19:42:45 crc kubenswrapper[4942]: I0218 19:42:45.094505 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d8c3f0-9548-40e6-8504-24d707672276-catalog-content\") pod \"community-operators-rcsvv\" (UID: \"e8d8c3f0-9548-40e6-8504-24d707672276\") " pod="openshift-marketplace/community-operators-rcsvv"
Feb 18 19:42:45 crc kubenswrapper[4942]: I0218 19:42:45.115852 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv55p\" (UniqueName: \"kubernetes.io/projected/e8d8c3f0-9548-40e6-8504-24d707672276-kube-api-access-rv55p\") pod \"community-operators-rcsvv\" (UID: \"e8d8c3f0-9548-40e6-8504-24d707672276\") " pod="openshift-marketplace/community-operators-rcsvv"
Feb 18 19:42:45 crc kubenswrapper[4942]: I0218 19:42:45.220539 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rcsvv"
Feb 18 19:42:45 crc kubenswrapper[4942]: I0218 19:42:45.732247 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rcsvv"]
Feb 18 19:42:46 crc kubenswrapper[4942]: I0218 19:42:46.392564 4942 generic.go:334] "Generic (PLEG): container finished" podID="e8d8c3f0-9548-40e6-8504-24d707672276" containerID="ded280edbce2aa1a11d503709260212e6b5706090d9f984afa63d6dd86863ba2" exitCode=0
Feb 18 19:42:46 crc kubenswrapper[4942]: I0218 19:42:46.392631 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rcsvv" event={"ID":"e8d8c3f0-9548-40e6-8504-24d707672276","Type":"ContainerDied","Data":"ded280edbce2aa1a11d503709260212e6b5706090d9f984afa63d6dd86863ba2"}
Feb 18 19:42:46 crc kubenswrapper[4942]: I0218 19:42:46.393181 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rcsvv" event={"ID":"e8d8c3f0-9548-40e6-8504-24d707672276","Type":"ContainerStarted","Data":"0ba7b0673b4b918e03a702622af24ec80f1948edb0f2b842c82a117574bfbca8"}
Feb 18 19:42:48 crc kubenswrapper[4942]: I0218 19:42:48.416650 4942 generic.go:334] "Generic (PLEG): container finished" podID="e8d8c3f0-9548-40e6-8504-24d707672276" containerID="fa0f6757cd76f268c59138bfc08e38083fac4ace63857f099ce51268c63b8fb2" exitCode=0
Feb 18 19:42:48 crc kubenswrapper[4942]: I0218 19:42:48.416853 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rcsvv" event={"ID":"e8d8c3f0-9548-40e6-8504-24d707672276","Type":"ContainerDied","Data":"fa0f6757cd76f268c59138bfc08e38083fac4ace63857f099ce51268c63b8fb2"}
Feb 18 19:42:49 crc kubenswrapper[4942]: I0218 19:42:49.428208 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rcsvv" event={"ID":"e8d8c3f0-9548-40e6-8504-24d707672276","Type":"ContainerStarted","Data":"243b462838d769b9d628afd971a258f566185d8510d00f59060d602f8c982659"}
Feb 18 19:42:49 crc kubenswrapper[4942]: I0218 19:42:49.464522 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rcsvv" podStartSLOduration=2.966293377 podStartE2EDuration="5.464503056s" podCreationTimestamp="2026-02-18 19:42:44 +0000 UTC" firstStartedPulling="2026-02-18 19:42:46.394987818 +0000 UTC m=+1526.099920483" lastFinishedPulling="2026-02-18 19:42:48.893197497 +0000 UTC m=+1528.598130162" observedRunningTime="2026-02-18 19:42:49.455058548 +0000 UTC m=+1529.159991233" watchObservedRunningTime="2026-02-18 19:42:49.464503056 +0000 UTC m=+1529.169435721"
Feb 18 19:42:55 crc kubenswrapper[4942]: I0218 19:42:55.221017 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rcsvv"
Feb 18 19:42:55 crc kubenswrapper[4942]: I0218 19:42:55.221396 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rcsvv"
Feb 18 19:42:55 crc kubenswrapper[4942]: I0218 19:42:55.284895 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rcsvv"
Feb 18 19:42:55 crc kubenswrapper[4942]: I0218 19:42:55.585253 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rcsvv"
Feb 18 19:42:55 crc kubenswrapper[4942]: I0218 19:42:55.653311 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rcsvv"]
Feb 18 19:42:57 crc kubenswrapper[4942]: I0218 19:42:57.511375 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rcsvv" podUID="e8d8c3f0-9548-40e6-8504-24d707672276" containerName="registry-server" containerID="cri-o://243b462838d769b9d628afd971a258f566185d8510d00f59060d602f8c982659" gracePeriod=2
Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.065672 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rcsvv"
Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.165744 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv55p\" (UniqueName: \"kubernetes.io/projected/e8d8c3f0-9548-40e6-8504-24d707672276-kube-api-access-rv55p\") pod \"e8d8c3f0-9548-40e6-8504-24d707672276\" (UID: \"e8d8c3f0-9548-40e6-8504-24d707672276\") "
Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.165898 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d8c3f0-9548-40e6-8504-24d707672276-utilities\") pod \"e8d8c3f0-9548-40e6-8504-24d707672276\" (UID: \"e8d8c3f0-9548-40e6-8504-24d707672276\") "
Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.165964 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d8c3f0-9548-40e6-8504-24d707672276-catalog-content\") pod \"e8d8c3f0-9548-40e6-8504-24d707672276\" (UID: \"e8d8c3f0-9548-40e6-8504-24d707672276\") "
Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.167137 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8d8c3f0-9548-40e6-8504-24d707672276-utilities" (OuterVolumeSpecName: "utilities") pod "e8d8c3f0-9548-40e6-8504-24d707672276" (UID: "e8d8c3f0-9548-40e6-8504-24d707672276"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.174087 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8d8c3f0-9548-40e6-8504-24d707672276-kube-api-access-rv55p" (OuterVolumeSpecName: "kube-api-access-rv55p") pod "e8d8c3f0-9548-40e6-8504-24d707672276" (UID: "e8d8c3f0-9548-40e6-8504-24d707672276"). InnerVolumeSpecName "kube-api-access-rv55p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.227803 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8d8c3f0-9548-40e6-8504-24d707672276-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8d8c3f0-9548-40e6-8504-24d707672276" (UID: "e8d8c3f0-9548-40e6-8504-24d707672276"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.268698 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv55p\" (UniqueName: \"kubernetes.io/projected/e8d8c3f0-9548-40e6-8504-24d707672276-kube-api-access-rv55p\") on node \"crc\" DevicePath \"\""
Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.268740 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d8c3f0-9548-40e6-8504-24d707672276-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.268752 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d8c3f0-9548-40e6-8504-24d707672276-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.527890 4942 generic.go:334] "Generic (PLEG): container finished" podID="e8d8c3f0-9548-40e6-8504-24d707672276" containerID="243b462838d769b9d628afd971a258f566185d8510d00f59060d602f8c982659" exitCode=0
Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.527988 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rcsvv" event={"ID":"e8d8c3f0-9548-40e6-8504-24d707672276","Type":"ContainerDied","Data":"243b462838d769b9d628afd971a258f566185d8510d00f59060d602f8c982659"}
Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.527979 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rcsvv"
Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.528053 4942 scope.go:117] "RemoveContainer" containerID="243b462838d769b9d628afd971a258f566185d8510d00f59060d602f8c982659"
Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.528035 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rcsvv" event={"ID":"e8d8c3f0-9548-40e6-8504-24d707672276","Type":"ContainerDied","Data":"0ba7b0673b4b918e03a702622af24ec80f1948edb0f2b842c82a117574bfbca8"}
Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.557742 4942 scope.go:117] "RemoveContainer" containerID="fa0f6757cd76f268c59138bfc08e38083fac4ace63857f099ce51268c63b8fb2"
Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.585864 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rcsvv"]
Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.595038 4942 scope.go:117] "RemoveContainer" containerID="ded280edbce2aa1a11d503709260212e6b5706090d9f984afa63d6dd86863ba2"
Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.600813 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rcsvv"]
Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.642079 4942 scope.go:117] "RemoveContainer" containerID="243b462838d769b9d628afd971a258f566185d8510d00f59060d602f8c982659"
Feb 18 19:42:58 crc kubenswrapper[4942]: E0218 19:42:58.642516 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"243b462838d769b9d628afd971a258f566185d8510d00f59060d602f8c982659\": container with ID starting with 243b462838d769b9d628afd971a258f566185d8510d00f59060d602f8c982659 not found: ID does not exist" containerID="243b462838d769b9d628afd971a258f566185d8510d00f59060d602f8c982659"
Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.642557 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"243b462838d769b9d628afd971a258f566185d8510d00f59060d602f8c982659"} err="failed to get container status \"243b462838d769b9d628afd971a258f566185d8510d00f59060d602f8c982659\": rpc error: code = NotFound desc = could not find container \"243b462838d769b9d628afd971a258f566185d8510d00f59060d602f8c982659\": container with ID starting with 243b462838d769b9d628afd971a258f566185d8510d00f59060d602f8c982659 not found: ID does not exist"
Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.642583 4942 scope.go:117] "RemoveContainer" containerID="fa0f6757cd76f268c59138bfc08e38083fac4ace63857f099ce51268c63b8fb2"
Feb 18 19:42:58 crc kubenswrapper[4942]: E0218 19:42:58.642951 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa0f6757cd76f268c59138bfc08e38083fac4ace63857f099ce51268c63b8fb2\": container with ID starting with fa0f6757cd76f268c59138bfc08e38083fac4ace63857f099ce51268c63b8fb2 not found: ID does not exist" containerID="fa0f6757cd76f268c59138bfc08e38083fac4ace63857f099ce51268c63b8fb2"
Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.642979 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa0f6757cd76f268c59138bfc08e38083fac4ace63857f099ce51268c63b8fb2"} err="failed to get container status \"fa0f6757cd76f268c59138bfc08e38083fac4ace63857f099ce51268c63b8fb2\": rpc error: code = NotFound desc = could not find container \"fa0f6757cd76f268c59138bfc08e38083fac4ace63857f099ce51268c63b8fb2\": container with ID starting with fa0f6757cd76f268c59138bfc08e38083fac4ace63857f099ce51268c63b8fb2 not found: ID does not exist"
Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.642999 4942 scope.go:117] "RemoveContainer" containerID="ded280edbce2aa1a11d503709260212e6b5706090d9f984afa63d6dd86863ba2"
Feb 18 19:42:58 crc kubenswrapper[4942]: E0218 19:42:58.643279 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ded280edbce2aa1a11d503709260212e6b5706090d9f984afa63d6dd86863ba2\": container with ID starting with ded280edbce2aa1a11d503709260212e6b5706090d9f984afa63d6dd86863ba2 not found: ID does not exist" containerID="ded280edbce2aa1a11d503709260212e6b5706090d9f984afa63d6dd86863ba2"
Feb 18 19:42:58 crc kubenswrapper[4942]: I0218 19:42:58.643329 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ded280edbce2aa1a11d503709260212e6b5706090d9f984afa63d6dd86863ba2"} err="failed to get container status \"ded280edbce2aa1a11d503709260212e6b5706090d9f984afa63d6dd86863ba2\": rpc error: code = NotFound desc = could not find container \"ded280edbce2aa1a11d503709260212e6b5706090d9f984afa63d6dd86863ba2\": container with ID starting with ded280edbce2aa1a11d503709260212e6b5706090d9f984afa63d6dd86863ba2 not found: ID does not exist"
Feb 18 19:42:59 crc kubenswrapper[4942]: I0218 19:42:59.046460 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8d8c3f0-9548-40e6-8504-24d707672276" path="/var/lib/kubelet/pods/e8d8c3f0-9548-40e6-8504-24d707672276/volumes"
Feb 18 19:43:17 crc kubenswrapper[4942]: I0218 19:43:17.721473 4942 generic.go:334] "Generic (PLEG): container finished" podID="8509499d-4716-44d6-8fb9-539350f38310" containerID="2739ca309f28eed92bcb31bf2162f38ddc41005b410bf64c078594e3f919f697" exitCode=0
Feb 18 19:43:17 crc kubenswrapper[4942]: I0218 19:43:17.721628 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk" event={"ID":"8509499d-4716-44d6-8fb9-539350f38310","Type":"ContainerDied","Data":"2739ca309f28eed92bcb31bf2162f38ddc41005b410bf64c078594e3f919f697"}
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.158981 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk"
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.290889 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-bootstrap-combined-ca-bundle\") pod \"8509499d-4716-44d6-8fb9-539350f38310\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") "
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.291251 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-ssh-key-openstack-edpm-ipam\") pod \"8509499d-4716-44d6-8fb9-539350f38310\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") "
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.291289 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtdhb\" (UniqueName: \"kubernetes.io/projected/8509499d-4716-44d6-8fb9-539350f38310-kube-api-access-jtdhb\") pod \"8509499d-4716-44d6-8fb9-539350f38310\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") "
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.291452 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-inventory\") pod \"8509499d-4716-44d6-8fb9-539350f38310\" (UID: \"8509499d-4716-44d6-8fb9-539350f38310\") "
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.296836 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "8509499d-4716-44d6-8fb9-539350f38310" (UID: "8509499d-4716-44d6-8fb9-539350f38310"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.298030 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8509499d-4716-44d6-8fb9-539350f38310-kube-api-access-jtdhb" (OuterVolumeSpecName: "kube-api-access-jtdhb") pod "8509499d-4716-44d6-8fb9-539350f38310" (UID: "8509499d-4716-44d6-8fb9-539350f38310"). InnerVolumeSpecName "kube-api-access-jtdhb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.334753 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8509499d-4716-44d6-8fb9-539350f38310" (UID: "8509499d-4716-44d6-8fb9-539350f38310"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.341073 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-inventory" (OuterVolumeSpecName: "inventory") pod "8509499d-4716-44d6-8fb9-539350f38310" (UID: "8509499d-4716-44d6-8fb9-539350f38310"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.394587 4942 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.394616 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.394626 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtdhb\" (UniqueName: \"kubernetes.io/projected/8509499d-4716-44d6-8fb9-539350f38310-kube-api-access-jtdhb\") on node \"crc\" DevicePath \"\""
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.394635 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8509499d-4716-44d6-8fb9-539350f38310-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.741629 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk" event={"ID":"8509499d-4716-44d6-8fb9-539350f38310","Type":"ContainerDied","Data":"fae6eb265c482ab9f0b9dae86441cc0e6b1492dd39f48b694aca54d22f8761f8"}
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.741675 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fae6eb265c482ab9f0b9dae86441cc0e6b1492dd39f48b694aca54d22f8761f8"
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.741670 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-x2stk"
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.830797 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq"]
Feb 18 19:43:19 crc kubenswrapper[4942]: E0218 19:43:19.831238 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d8c3f0-9548-40e6-8504-24d707672276" containerName="extract-utilities"
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.831261 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d8c3f0-9548-40e6-8504-24d707672276" containerName="extract-utilities"
Feb 18 19:43:19 crc kubenswrapper[4942]: E0218 19:43:19.831270 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d8c3f0-9548-40e6-8504-24d707672276" containerName="registry-server"
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.831276 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d8c3f0-9548-40e6-8504-24d707672276" containerName="registry-server"
Feb 18 19:43:19 crc kubenswrapper[4942]: E0218 19:43:19.831284 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d8c3f0-9548-40e6-8504-24d707672276" containerName="extract-content"
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.831291 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d8c3f0-9548-40e6-8504-24d707672276" containerName="extract-content"
Feb 18 19:43:19 crc kubenswrapper[4942]: E0218 19:43:19.831330 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8509499d-4716-44d6-8fb9-539350f38310" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.831338 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="8509499d-4716-44d6-8fb9-539350f38310" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.831548 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="8509499d-4716-44d6-8fb9-539350f38310" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.831569 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8d8c3f0-9548-40e6-8504-24d707672276" containerName="registry-server"
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.832221 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq"
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.835249 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.835482 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.835746 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.835914 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh"
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.844587 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq"]
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.903885 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3125f54-a594-4c20-ab3f-298cd68f3709-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq\" (UID: \"b3125f54-a594-4c20-ab3f-298cd68f3709\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq"
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.903969 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3125f54-a594-4c20-ab3f-298cd68f3709-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq\" (UID: \"b3125f54-a594-4c20-ab3f-298cd68f3709\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq"
Feb 18 19:43:19 crc kubenswrapper[4942]: I0218 19:43:19.904000 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9rz6\" (UniqueName: \"kubernetes.io/projected/b3125f54-a594-4c20-ab3f-298cd68f3709-kube-api-access-v9rz6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq\" (UID: \"b3125f54-a594-4c20-ab3f-298cd68f3709\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq"
Feb 18 19:43:20 crc kubenswrapper[4942]: I0218 19:43:20.006172 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3125f54-a594-4c20-ab3f-298cd68f3709-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq\" (UID: \"b3125f54-a594-4c20-ab3f-298cd68f3709\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq"
Feb 18 19:43:20 crc kubenswrapper[4942]: I0218 19:43:20.006273 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3125f54-a594-4c20-ab3f-298cd68f3709-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq\" (UID: \"b3125f54-a594-4c20-ab3f-298cd68f3709\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq"
Feb 18 19:43:20 crc kubenswrapper[4942]: I0218 19:43:20.006304 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9rz6\" (UniqueName: \"kubernetes.io/projected/b3125f54-a594-4c20-ab3f-298cd68f3709-kube-api-access-v9rz6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq\" (UID: \"b3125f54-a594-4c20-ab3f-298cd68f3709\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq"
Feb 18 19:43:20 crc kubenswrapper[4942]: I0218 19:43:20.011202 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3125f54-a594-4c20-ab3f-298cd68f3709-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq\" (UID: \"b3125f54-a594-4c20-ab3f-298cd68f3709\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq"
Feb 18 19:43:20 crc kubenswrapper[4942]: I0218 19:43:20.011739 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3125f54-a594-4c20-ab3f-298cd68f3709-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq\" (UID: \"b3125f54-a594-4c20-ab3f-298cd68f3709\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq"
Feb 18 19:43:20 crc kubenswrapper[4942]: I0218 19:43:20.026189 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9rz6\" (UniqueName: \"kubernetes.io/projected/b3125f54-a594-4c20-ab3f-298cd68f3709-kube-api-access-v9rz6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq\" (UID: \"b3125f54-a594-4c20-ab3f-298cd68f3709\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq"
Feb 18 19:43:20 crc kubenswrapper[4942]: I0218 19:43:20.195978 4942 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq" Feb 18 19:43:20 crc kubenswrapper[4942]: I0218 19:43:20.729206 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq"] Feb 18 19:43:20 crc kubenswrapper[4942]: W0218 19:43:20.737860 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3125f54_a594_4c20_ab3f_298cd68f3709.slice/crio-676f2e621a4acf72ea1e7f3770f3144f3d9cebbcc9f5130f6749209326139fa6 WatchSource:0}: Error finding container 676f2e621a4acf72ea1e7f3770f3144f3d9cebbcc9f5130f6749209326139fa6: Status 404 returned error can't find the container with id 676f2e621a4acf72ea1e7f3770f3144f3d9cebbcc9f5130f6749209326139fa6 Feb 18 19:43:20 crc kubenswrapper[4942]: I0218 19:43:20.753646 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq" event={"ID":"b3125f54-a594-4c20-ab3f-298cd68f3709","Type":"ContainerStarted","Data":"676f2e621a4acf72ea1e7f3770f3144f3d9cebbcc9f5130f6749209326139fa6"} Feb 18 19:43:21 crc kubenswrapper[4942]: I0218 19:43:21.248342 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:43:21 crc kubenswrapper[4942]: I0218 19:43:21.763957 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq" event={"ID":"b3125f54-a594-4c20-ab3f-298cd68f3709","Type":"ContainerStarted","Data":"ac85b185c9e8dc808f91626080d82ac680145476d477bab1b8677a51e222d00a"} Feb 18 19:43:21 crc kubenswrapper[4942]: I0218 19:43:21.786477 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq" podStartSLOduration=2.280901369 podStartE2EDuration="2.786458625s" podCreationTimestamp="2026-02-18 
19:43:19 +0000 UTC" firstStartedPulling="2026-02-18 19:43:20.739716091 +0000 UTC m=+1560.444648756" lastFinishedPulling="2026-02-18 19:43:21.245273347 +0000 UTC m=+1560.950206012" observedRunningTime="2026-02-18 19:43:21.781995237 +0000 UTC m=+1561.486927912" watchObservedRunningTime="2026-02-18 19:43:21.786458625 +0000 UTC m=+1561.491391290" Feb 18 19:43:24 crc kubenswrapper[4942]: I0218 19:43:24.366091 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rnvb2"] Feb 18 19:43:24 crc kubenswrapper[4942]: I0218 19:43:24.373257 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:24 crc kubenswrapper[4942]: I0218 19:43:24.398421 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnvb2"] Feb 18 19:43:24 crc kubenswrapper[4942]: I0218 19:43:24.494149 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bnt5\" (UniqueName: \"kubernetes.io/projected/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-kube-api-access-7bnt5\") pod \"redhat-marketplace-rnvb2\" (UID: \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\") " pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:24 crc kubenswrapper[4942]: I0218 19:43:24.494973 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-utilities\") pod \"redhat-marketplace-rnvb2\" (UID: \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\") " pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:24 crc kubenswrapper[4942]: I0218 19:43:24.495050 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-catalog-content\") pod 
\"redhat-marketplace-rnvb2\" (UID: \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\") " pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:24 crc kubenswrapper[4942]: I0218 19:43:24.598034 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-catalog-content\") pod \"redhat-marketplace-rnvb2\" (UID: \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\") " pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:24 crc kubenswrapper[4942]: I0218 19:43:24.598148 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bnt5\" (UniqueName: \"kubernetes.io/projected/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-kube-api-access-7bnt5\") pod \"redhat-marketplace-rnvb2\" (UID: \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\") " pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:24 crc kubenswrapper[4942]: I0218 19:43:24.598263 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-utilities\") pod \"redhat-marketplace-rnvb2\" (UID: \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\") " pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:24 crc kubenswrapper[4942]: I0218 19:43:24.598536 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-catalog-content\") pod \"redhat-marketplace-rnvb2\" (UID: \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\") " pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:24 crc kubenswrapper[4942]: I0218 19:43:24.598675 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-utilities\") pod \"redhat-marketplace-rnvb2\" (UID: 
\"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\") " pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:24 crc kubenswrapper[4942]: I0218 19:43:24.622466 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bnt5\" (UniqueName: \"kubernetes.io/projected/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-kube-api-access-7bnt5\") pod \"redhat-marketplace-rnvb2\" (UID: \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\") " pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:24 crc kubenswrapper[4942]: I0218 19:43:24.702294 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:25 crc kubenswrapper[4942]: I0218 19:43:25.210183 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnvb2"] Feb 18 19:43:25 crc kubenswrapper[4942]: I0218 19:43:25.800674 4942 generic.go:334] "Generic (PLEG): container finished" podID="20cf93d8-a0c4-4855-9a18-b8e1ea19e417" containerID="a346658695d44de69b00bad4467ae469263679b90ace11bf130b86d7e2607c67" exitCode=0 Feb 18 19:43:25 crc kubenswrapper[4942]: I0218 19:43:25.800748 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnvb2" event={"ID":"20cf93d8-a0c4-4855-9a18-b8e1ea19e417","Type":"ContainerDied","Data":"a346658695d44de69b00bad4467ae469263679b90ace11bf130b86d7e2607c67"} Feb 18 19:43:25 crc kubenswrapper[4942]: I0218 19:43:25.801167 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnvb2" event={"ID":"20cf93d8-a0c4-4855-9a18-b8e1ea19e417","Type":"ContainerStarted","Data":"b91dfcc0a55e7f78bdfd59306beaed39399ca278506b90a4cd7e3ad2eb529cb3"} Feb 18 19:43:26 crc kubenswrapper[4942]: I0218 19:43:26.811489 4942 generic.go:334] "Generic (PLEG): container finished" podID="20cf93d8-a0c4-4855-9a18-b8e1ea19e417" 
containerID="32f8e99918ff07fc14367108ff287462ad33c735814cfe2dda213261938f7ea0" exitCode=0 Feb 18 19:43:26 crc kubenswrapper[4942]: I0218 19:43:26.811604 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnvb2" event={"ID":"20cf93d8-a0c4-4855-9a18-b8e1ea19e417","Type":"ContainerDied","Data":"32f8e99918ff07fc14367108ff287462ad33c735814cfe2dda213261938f7ea0"} Feb 18 19:43:27 crc kubenswrapper[4942]: I0218 19:43:27.825123 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnvb2" event={"ID":"20cf93d8-a0c4-4855-9a18-b8e1ea19e417","Type":"ContainerStarted","Data":"406158621c6c4da0fe24adcae1a083dc1c9157ab467bceba16261a55240ef835"} Feb 18 19:43:27 crc kubenswrapper[4942]: I0218 19:43:27.853124 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rnvb2" podStartSLOduration=2.412958847 podStartE2EDuration="3.853102156s" podCreationTimestamp="2026-02-18 19:43:24 +0000 UTC" firstStartedPulling="2026-02-18 19:43:25.80233476 +0000 UTC m=+1565.507267435" lastFinishedPulling="2026-02-18 19:43:27.242478069 +0000 UTC m=+1566.947410744" observedRunningTime="2026-02-18 19:43:27.84193699 +0000 UTC m=+1567.546869735" watchObservedRunningTime="2026-02-18 19:43:27.853102156 +0000 UTC m=+1567.558034821" Feb 18 19:43:34 crc kubenswrapper[4942]: I0218 19:43:34.703464 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:34 crc kubenswrapper[4942]: I0218 19:43:34.704066 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:34 crc kubenswrapper[4942]: I0218 19:43:34.748878 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:34 crc kubenswrapper[4942]: I0218 19:43:34.996889 4942 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:35 crc kubenswrapper[4942]: I0218 19:43:35.047063 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnvb2"] Feb 18 19:43:36 crc kubenswrapper[4942]: I0218 19:43:36.940106 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rnvb2" podUID="20cf93d8-a0c4-4855-9a18-b8e1ea19e417" containerName="registry-server" containerID="cri-o://406158621c6c4da0fe24adcae1a083dc1c9157ab467bceba16261a55240ef835" gracePeriod=2 Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.467219 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.496334 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bnt5\" (UniqueName: \"kubernetes.io/projected/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-kube-api-access-7bnt5\") pod \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\" (UID: \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\") " Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.496559 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-catalog-content\") pod \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\" (UID: \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\") " Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.496581 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-utilities\") pod \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\" (UID: \"20cf93d8-a0c4-4855-9a18-b8e1ea19e417\") " Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.497625 4942 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-utilities" (OuterVolumeSpecName: "utilities") pod "20cf93d8-a0c4-4855-9a18-b8e1ea19e417" (UID: "20cf93d8-a0c4-4855-9a18-b8e1ea19e417"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.505325 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-kube-api-access-7bnt5" (OuterVolumeSpecName: "kube-api-access-7bnt5") pod "20cf93d8-a0c4-4855-9a18-b8e1ea19e417" (UID: "20cf93d8-a0c4-4855-9a18-b8e1ea19e417"). InnerVolumeSpecName "kube-api-access-7bnt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.541615 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20cf93d8-a0c4-4855-9a18-b8e1ea19e417" (UID: "20cf93d8-a0c4-4855-9a18-b8e1ea19e417"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.598241 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.598274 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.598288 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bnt5\" (UniqueName: \"kubernetes.io/projected/20cf93d8-a0c4-4855-9a18-b8e1ea19e417-kube-api-access-7bnt5\") on node \"crc\" DevicePath \"\"" Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.953900 4942 generic.go:334] "Generic (PLEG): container finished" podID="20cf93d8-a0c4-4855-9a18-b8e1ea19e417" containerID="406158621c6c4da0fe24adcae1a083dc1c9157ab467bceba16261a55240ef835" exitCode=0 Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.953948 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnvb2" event={"ID":"20cf93d8-a0c4-4855-9a18-b8e1ea19e417","Type":"ContainerDied","Data":"406158621c6c4da0fe24adcae1a083dc1c9157ab467bceba16261a55240ef835"} Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.953984 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnvb2" event={"ID":"20cf93d8-a0c4-4855-9a18-b8e1ea19e417","Type":"ContainerDied","Data":"b91dfcc0a55e7f78bdfd59306beaed39399ca278506b90a4cd7e3ad2eb529cb3"} Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.953962 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnvb2" Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.954035 4942 scope.go:117] "RemoveContainer" containerID="406158621c6c4da0fe24adcae1a083dc1c9157ab467bceba16261a55240ef835" Feb 18 19:43:37 crc kubenswrapper[4942]: I0218 19:43:37.978287 4942 scope.go:117] "RemoveContainer" containerID="32f8e99918ff07fc14367108ff287462ad33c735814cfe2dda213261938f7ea0" Feb 18 19:43:38 crc kubenswrapper[4942]: I0218 19:43:38.000632 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnvb2"] Feb 18 19:43:38 crc kubenswrapper[4942]: I0218 19:43:38.012252 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnvb2"] Feb 18 19:43:38 crc kubenswrapper[4942]: I0218 19:43:38.056728 4942 scope.go:117] "RemoveContainer" containerID="a346658695d44de69b00bad4467ae469263679b90ace11bf130b86d7e2607c67" Feb 18 19:43:38 crc kubenswrapper[4942]: I0218 19:43:38.081315 4942 scope.go:117] "RemoveContainer" containerID="406158621c6c4da0fe24adcae1a083dc1c9157ab467bceba16261a55240ef835" Feb 18 19:43:38 crc kubenswrapper[4942]: E0218 19:43:38.081791 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"406158621c6c4da0fe24adcae1a083dc1c9157ab467bceba16261a55240ef835\": container with ID starting with 406158621c6c4da0fe24adcae1a083dc1c9157ab467bceba16261a55240ef835 not found: ID does not exist" containerID="406158621c6c4da0fe24adcae1a083dc1c9157ab467bceba16261a55240ef835" Feb 18 19:43:38 crc kubenswrapper[4942]: I0218 19:43:38.081828 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"406158621c6c4da0fe24adcae1a083dc1c9157ab467bceba16261a55240ef835"} err="failed to get container status \"406158621c6c4da0fe24adcae1a083dc1c9157ab467bceba16261a55240ef835\": rpc error: code = NotFound desc = could not find container 
\"406158621c6c4da0fe24adcae1a083dc1c9157ab467bceba16261a55240ef835\": container with ID starting with 406158621c6c4da0fe24adcae1a083dc1c9157ab467bceba16261a55240ef835 not found: ID does not exist" Feb 18 19:43:38 crc kubenswrapper[4942]: I0218 19:43:38.081854 4942 scope.go:117] "RemoveContainer" containerID="32f8e99918ff07fc14367108ff287462ad33c735814cfe2dda213261938f7ea0" Feb 18 19:43:38 crc kubenswrapper[4942]: E0218 19:43:38.082191 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32f8e99918ff07fc14367108ff287462ad33c735814cfe2dda213261938f7ea0\": container with ID starting with 32f8e99918ff07fc14367108ff287462ad33c735814cfe2dda213261938f7ea0 not found: ID does not exist" containerID="32f8e99918ff07fc14367108ff287462ad33c735814cfe2dda213261938f7ea0" Feb 18 19:43:38 crc kubenswrapper[4942]: I0218 19:43:38.082220 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32f8e99918ff07fc14367108ff287462ad33c735814cfe2dda213261938f7ea0"} err="failed to get container status \"32f8e99918ff07fc14367108ff287462ad33c735814cfe2dda213261938f7ea0\": rpc error: code = NotFound desc = could not find container \"32f8e99918ff07fc14367108ff287462ad33c735814cfe2dda213261938f7ea0\": container with ID starting with 32f8e99918ff07fc14367108ff287462ad33c735814cfe2dda213261938f7ea0 not found: ID does not exist" Feb 18 19:43:38 crc kubenswrapper[4942]: I0218 19:43:38.082237 4942 scope.go:117] "RemoveContainer" containerID="a346658695d44de69b00bad4467ae469263679b90ace11bf130b86d7e2607c67" Feb 18 19:43:38 crc kubenswrapper[4942]: E0218 19:43:38.082478 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a346658695d44de69b00bad4467ae469263679b90ace11bf130b86d7e2607c67\": container with ID starting with a346658695d44de69b00bad4467ae469263679b90ace11bf130b86d7e2607c67 not found: ID does not exist" 
containerID="a346658695d44de69b00bad4467ae469263679b90ace11bf130b86d7e2607c67" Feb 18 19:43:38 crc kubenswrapper[4942]: I0218 19:43:38.082501 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a346658695d44de69b00bad4467ae469263679b90ace11bf130b86d7e2607c67"} err="failed to get container status \"a346658695d44de69b00bad4467ae469263679b90ace11bf130b86d7e2607c67\": rpc error: code = NotFound desc = could not find container \"a346658695d44de69b00bad4467ae469263679b90ace11bf130b86d7e2607c67\": container with ID starting with a346658695d44de69b00bad4467ae469263679b90ace11bf130b86d7e2607c67 not found: ID does not exist" Feb 18 19:43:39 crc kubenswrapper[4942]: I0218 19:43:39.057542 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20cf93d8-a0c4-4855-9a18-b8e1ea19e417" path="/var/lib/kubelet/pods/20cf93d8-a0c4-4855-9a18-b8e1ea19e417/volumes" Feb 18 19:43:47 crc kubenswrapper[4942]: I0218 19:43:47.196180 4942 scope.go:117] "RemoveContainer" containerID="a988a34c898a05087381b3c398ec9025e84f7ccd37d7a000f5a4025b770b9c31" Feb 18 19:43:47 crc kubenswrapper[4942]: I0218 19:43:47.227736 4942 scope.go:117] "RemoveContainer" containerID="4d23d58052be19c944bbfb1bdcae23f79449638dec97cb1fe1f8ae8d61b02fff" Feb 18 19:43:47 crc kubenswrapper[4942]: I0218 19:43:47.256135 4942 scope.go:117] "RemoveContainer" containerID="5c56a687bcaef7e5e54c6de1b78374726c82904080884876b458c8525f4a0752" Feb 18 19:43:55 crc kubenswrapper[4942]: I0218 19:43:55.056429 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-d9d4-account-create-update-7gsvf"] Feb 18 19:43:55 crc kubenswrapper[4942]: I0218 19:43:55.057067 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-d9d4-account-create-update-7gsvf"] Feb 18 19:43:56 crc kubenswrapper[4942]: I0218 19:43:56.064714 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-9457-account-create-update-5hrw4"] Feb 18 
19:43:56 crc kubenswrapper[4942]: I0218 19:43:56.076431 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-9457-account-create-update-5hrw4"] Feb 18 19:43:57 crc kubenswrapper[4942]: I0218 19:43:57.048083 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="646ba630-1210-431d-8902-b5c0968b35bb" path="/var/lib/kubelet/pods/646ba630-1210-431d-8902-b5c0968b35bb/volumes" Feb 18 19:43:57 crc kubenswrapper[4942]: I0218 19:43:57.049542 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba056ec7-86a5-43b6-aebd-a22b21843cc3" path="/var/lib/kubelet/pods/ba056ec7-86a5-43b6-aebd-a22b21843cc3/volumes" Feb 18 19:43:57 crc kubenswrapper[4942]: I0218 19:43:57.051662 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-h49cz"] Feb 18 19:43:57 crc kubenswrapper[4942]: I0218 19:43:57.062680 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-9xsbj"] Feb 18 19:43:57 crc kubenswrapper[4942]: I0218 19:43:57.071222 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-59tjm"] Feb 18 19:43:57 crc kubenswrapper[4942]: I0218 19:43:57.079087 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-ce28-account-create-update-h5jjz"] Feb 18 19:43:57 crc kubenswrapper[4942]: I0218 19:43:57.090197 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-9xsbj"] Feb 18 19:43:57 crc kubenswrapper[4942]: I0218 19:43:57.099195 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-ce28-account-create-update-h5jjz"] Feb 18 19:43:57 crc kubenswrapper[4942]: I0218 19:43:57.107459 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-h49cz"] Feb 18 19:43:57 crc kubenswrapper[4942]: I0218 19:43:57.115745 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-59tjm"] Feb 18 
19:43:59 crc kubenswrapper[4942]: I0218 19:43:59.049325 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f4f7b72-968a-4aed-b6e9-87f43677f342" path="/var/lib/kubelet/pods/2f4f7b72-968a-4aed-b6e9-87f43677f342/volumes" Feb 18 19:43:59 crc kubenswrapper[4942]: I0218 19:43:59.051387 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="371430b6-c9b6-48ba-a1a7-d1ce72a001ec" path="/var/lib/kubelet/pods/371430b6-c9b6-48ba-a1a7-d1ce72a001ec/volumes" Feb 18 19:43:59 crc kubenswrapper[4942]: I0218 19:43:59.052697 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6821c713-6163-44f5-a749-415f0c1d8337" path="/var/lib/kubelet/pods/6821c713-6163-44f5-a749-415f0c1d8337/volumes" Feb 18 19:43:59 crc kubenswrapper[4942]: I0218 19:43:59.053551 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3564c8a-5e18-4c53-b225-7e9baf41a371" path="/var/lib/kubelet/pods/a3564c8a-5e18-4c53-b225-7e9baf41a371/volumes" Feb 18 19:44:04 crc kubenswrapper[4942]: I0218 19:44:04.049957 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-tjf5x"] Feb 18 19:44:04 crc kubenswrapper[4942]: I0218 19:44:04.063510 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-tjf5x"] Feb 18 19:44:04 crc kubenswrapper[4942]: I0218 19:44:04.076821 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8ff9-account-create-update-k7n8f"] Feb 18 19:44:04 crc kubenswrapper[4942]: I0218 19:44:04.088057 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-8ff9-account-create-update-k7n8f"] Feb 18 19:44:05 crc kubenswrapper[4942]: I0218 19:44:05.057080 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a1ca129-f896-4d68-b119-701a991fe0ba" path="/var/lib/kubelet/pods/6a1ca129-f896-4d68-b119-701a991fe0ba/volumes" Feb 18 19:44:05 crc kubenswrapper[4942]: I0218 19:44:05.058621 4942 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8611c14f-da0c-410e-9c3a-dc6cb5a698a7" path="/var/lib/kubelet/pods/8611c14f-da0c-410e-9c3a-dc6cb5a698a7/volumes" Feb 18 19:44:23 crc kubenswrapper[4942]: I0218 19:44:23.741470 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:44:23 crc kubenswrapper[4942]: I0218 19:44:23.742158 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:44:28 crc kubenswrapper[4942]: I0218 19:44:28.064290 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f862-account-create-update-29qlq"] Feb 18 19:44:28 crc kubenswrapper[4942]: I0218 19:44:28.081509 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-njfd6"] Feb 18 19:44:28 crc kubenswrapper[4942]: I0218 19:44:28.094696 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-4zlhp"] Feb 18 19:44:28 crc kubenswrapper[4942]: I0218 19:44:28.104108 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-8f782"] Feb 18 19:44:28 crc kubenswrapper[4942]: I0218 19:44:28.113523 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-f862-account-create-update-29qlq"] Feb 18 19:44:28 crc kubenswrapper[4942]: I0218 19:44:28.122670 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-8f782"] Feb 18 19:44:28 crc kubenswrapper[4942]: I0218 19:44:28.129989 
4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-e916-account-create-update-lm2r5"] Feb 18 19:44:28 crc kubenswrapper[4942]: I0218 19:44:28.137068 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-4zlhp"] Feb 18 19:44:28 crc kubenswrapper[4942]: I0218 19:44:28.145155 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-njfd6"] Feb 18 19:44:28 crc kubenswrapper[4942]: I0218 19:44:28.152955 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-e916-account-create-update-lm2r5"] Feb 18 19:44:29 crc kubenswrapper[4942]: I0218 19:44:29.057669 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35dbdf24-b5f9-4a19-96f9-1fe390df90e1" path="/var/lib/kubelet/pods/35dbdf24-b5f9-4a19-96f9-1fe390df90e1/volumes" Feb 18 19:44:29 crc kubenswrapper[4942]: I0218 19:44:29.059670 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4edc6296-1ba6-43f7-a076-93f94c77a2c9" path="/var/lib/kubelet/pods/4edc6296-1ba6-43f7-a076-93f94c77a2c9/volumes" Feb 18 19:44:29 crc kubenswrapper[4942]: I0218 19:44:29.062023 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a8e424f-44a5-4eaa-9f3f-882f070aa404" path="/var/lib/kubelet/pods/9a8e424f-44a5-4eaa-9f3f-882f070aa404/volumes" Feb 18 19:44:29 crc kubenswrapper[4942]: I0218 19:44:29.063400 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dddbc305-d881-4ef9-ada1-49e8f180162c" path="/var/lib/kubelet/pods/dddbc305-d881-4ef9-ada1-49e8f180162c/volumes" Feb 18 19:44:29 crc kubenswrapper[4942]: I0218 19:44:29.065932 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcea68e2-0d37-4812-a7ad-403e59b7b556" path="/var/lib/kubelet/pods/fcea68e2-0d37-4812-a7ad-403e59b7b556/volumes" Feb 18 19:44:31 crc kubenswrapper[4942]: I0218 19:44:31.062615 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/neutron-db-create-s54gq"] Feb 18 19:44:31 crc kubenswrapper[4942]: I0218 19:44:31.063884 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-fee6-account-create-update-jhlbn"] Feb 18 19:44:31 crc kubenswrapper[4942]: I0218 19:44:31.088321 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-s54gq"] Feb 18 19:44:31 crc kubenswrapper[4942]: I0218 19:44:31.100217 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-fee6-account-create-update-jhlbn"] Feb 18 19:44:33 crc kubenswrapper[4942]: I0218 19:44:33.048644 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c903d652-2880-43bd-9445-f1b03764f413" path="/var/lib/kubelet/pods/c903d652-2880-43bd-9445-f1b03764f413/volumes" Feb 18 19:44:33 crc kubenswrapper[4942]: I0218 19:44:33.050357 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd491cd9-f58f-4821-8004-a5a4762d6bdb" path="/var/lib/kubelet/pods/fd491cd9-f58f-4821-8004-a5a4762d6bdb/volumes" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.380410 4942 scope.go:117] "RemoveContainer" containerID="7973de763d55a77ffbc3e3d1001daee7ca68a526d4309188caa67a4ce4135e55" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.447366 4942 scope.go:117] "RemoveContainer" containerID="727dde1e275a9b0b467f516dab63cba62b27e6168562e7bbd076fe7b30b2869f" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.506946 4942 scope.go:117] "RemoveContainer" containerID="837718ff91cb054c2e7fe10e6239bf44f02d0dd7d7855db97e09e837f3dcef65" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.560820 4942 scope.go:117] "RemoveContainer" containerID="549770ba7dc9b2efdf1b7dbd1827ec366b9e1e693aeec0f1a695091cdbeda9bc" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.617825 4942 scope.go:117] "RemoveContainer" containerID="761092c069dfd66382418fe07bf3c15f0aee53ccbdf6b11196e33385aae3fc8b" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 
19:44:47.655528 4942 scope.go:117] "RemoveContainer" containerID="b1d49648de6b3a759e8404975f38b8d6b28e2ed6cf3c88b12649b6a3fed64a43" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.708142 4942 scope.go:117] "RemoveContainer" containerID="c942add3a433a64faf7638403a168e22e7b5e2f26ceaa17e1731c6044072942d" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.739699 4942 scope.go:117] "RemoveContainer" containerID="0e02d4fe73a4e293f62bf869926c2629a47060f29d5a8a14b093d650895a851c" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.770491 4942 scope.go:117] "RemoveContainer" containerID="4e49158c977b69109020d9375918418b28e7f6670849fc1495f27f4bb36f8420" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.793814 4942 scope.go:117] "RemoveContainer" containerID="376d0fc77c68f0c59dee539c15e1e9f915935989d2e259a07dc205d03784efe9" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.833657 4942 scope.go:117] "RemoveContainer" containerID="f3ac5111bbb6bd92f96a1d8bfbfe931ddce997416181ddc95500cf9c11a42867" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.857972 4942 scope.go:117] "RemoveContainer" containerID="811ec8cee78f943aac4bbfb29b95ea4e9d51e51453fc9da48c7eabb6372bfb2b" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.885286 4942 scope.go:117] "RemoveContainer" containerID="e0735df4037c9d26aa2f69d57c8e775cb7c18bc1fdb68127c0b914f822f83bec" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.914440 4942 scope.go:117] "RemoveContainer" containerID="a8c3861121c5594ca501846681ea609d414d4c26e10e1b891f8ff728174138b2" Feb 18 19:44:47 crc kubenswrapper[4942]: I0218 19:44:47.935039 4942 scope.go:117] "RemoveContainer" containerID="55245bf67e01b4a9996ff8822e688651e94d412e130a306f9914243a723acae1" Feb 18 19:44:49 crc kubenswrapper[4942]: I0218 19:44:49.057382 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-87p82"] Feb 18 19:44:49 crc kubenswrapper[4942]: I0218 19:44:49.062383 4942 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/keystone-db-sync-87p82"] Feb 18 19:44:49 crc kubenswrapper[4942]: I0218 19:44:49.770918 4942 generic.go:334] "Generic (PLEG): container finished" podID="b3125f54-a594-4c20-ab3f-298cd68f3709" containerID="ac85b185c9e8dc808f91626080d82ac680145476d477bab1b8677a51e222d00a" exitCode=0 Feb 18 19:44:49 crc kubenswrapper[4942]: I0218 19:44:49.770974 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq" event={"ID":"b3125f54-a594-4c20-ab3f-298cd68f3709","Type":"ContainerDied","Data":"ac85b185c9e8dc808f91626080d82ac680145476d477bab1b8677a51e222d00a"} Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.058819 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ed4f34d-fe0d-402c-95d3-171e73eb5bd5" path="/var/lib/kubelet/pods/7ed4f34d-fe0d-402c-95d3-171e73eb5bd5/volumes" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.246947 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.313234 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9rz6\" (UniqueName: \"kubernetes.io/projected/b3125f54-a594-4c20-ab3f-298cd68f3709-kube-api-access-v9rz6\") pod \"b3125f54-a594-4c20-ab3f-298cd68f3709\" (UID: \"b3125f54-a594-4c20-ab3f-298cd68f3709\") " Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.313328 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3125f54-a594-4c20-ab3f-298cd68f3709-inventory\") pod \"b3125f54-a594-4c20-ab3f-298cd68f3709\" (UID: \"b3125f54-a594-4c20-ab3f-298cd68f3709\") " Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.313425 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3125f54-a594-4c20-ab3f-298cd68f3709-ssh-key-openstack-edpm-ipam\") pod \"b3125f54-a594-4c20-ab3f-298cd68f3709\" (UID: \"b3125f54-a594-4c20-ab3f-298cd68f3709\") " Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.327962 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3125f54-a594-4c20-ab3f-298cd68f3709-kube-api-access-v9rz6" (OuterVolumeSpecName: "kube-api-access-v9rz6") pod "b3125f54-a594-4c20-ab3f-298cd68f3709" (UID: "b3125f54-a594-4c20-ab3f-298cd68f3709"). InnerVolumeSpecName "kube-api-access-v9rz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.351063 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3125f54-a594-4c20-ab3f-298cd68f3709-inventory" (OuterVolumeSpecName: "inventory") pod "b3125f54-a594-4c20-ab3f-298cd68f3709" (UID: "b3125f54-a594-4c20-ab3f-298cd68f3709"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.354547 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3125f54-a594-4c20-ab3f-298cd68f3709-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b3125f54-a594-4c20-ab3f-298cd68f3709" (UID: "b3125f54-a594-4c20-ab3f-298cd68f3709"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.415145 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9rz6\" (UniqueName: \"kubernetes.io/projected/b3125f54-a594-4c20-ab3f-298cd68f3709-kube-api-access-v9rz6\") on node \"crc\" DevicePath \"\"" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.415215 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3125f54-a594-4c20-ab3f-298cd68f3709-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.415235 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3125f54-a594-4c20-ab3f-298cd68f3709-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.790606 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq" event={"ID":"b3125f54-a594-4c20-ab3f-298cd68f3709","Type":"ContainerDied","Data":"676f2e621a4acf72ea1e7f3770f3144f3d9cebbcc9f5130f6749209326139fa6"} Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.790647 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="676f2e621a4acf72ea1e7f3770f3144f3d9cebbcc9f5130f6749209326139fa6" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 
19:44:51.790658 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-cdcjq" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.887259 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9"] Feb 18 19:44:51 crc kubenswrapper[4942]: E0218 19:44:51.887701 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20cf93d8-a0c4-4855-9a18-b8e1ea19e417" containerName="extract-content" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.887719 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="20cf93d8-a0c4-4855-9a18-b8e1ea19e417" containerName="extract-content" Feb 18 19:44:51 crc kubenswrapper[4942]: E0218 19:44:51.887752 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3125f54-a594-4c20-ab3f-298cd68f3709" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.887776 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3125f54-a594-4c20-ab3f-298cd68f3709" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 19:44:51 crc kubenswrapper[4942]: E0218 19:44:51.887787 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20cf93d8-a0c4-4855-9a18-b8e1ea19e417" containerName="registry-server" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.887793 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="20cf93d8-a0c4-4855-9a18-b8e1ea19e417" containerName="registry-server" Feb 18 19:44:51 crc kubenswrapper[4942]: E0218 19:44:51.887809 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20cf93d8-a0c4-4855-9a18-b8e1ea19e417" containerName="extract-utilities" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.887816 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="20cf93d8-a0c4-4855-9a18-b8e1ea19e417" 
containerName="extract-utilities" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.888044 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3125f54-a594-4c20-ab3f-298cd68f3709" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.888055 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="20cf93d8-a0c4-4855-9a18-b8e1ea19e417" containerName="registry-server" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.890983 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.895511 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.897048 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.907066 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9"] Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.909572 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:44:51 crc kubenswrapper[4942]: I0218 19:44:51.909803 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:44:52 crc kubenswrapper[4942]: I0218 19:44:52.027968 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/abda062b-22ed-4d21-adbb-f2b906e36e02-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9\" (UID: 
\"abda062b-22ed-4d21-adbb-f2b906e36e02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" Feb 18 19:44:52 crc kubenswrapper[4942]: I0218 19:44:52.028094 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx945\" (UniqueName: \"kubernetes.io/projected/abda062b-22ed-4d21-adbb-f2b906e36e02-kube-api-access-fx945\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9\" (UID: \"abda062b-22ed-4d21-adbb-f2b906e36e02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" Feb 18 19:44:52 crc kubenswrapper[4942]: I0218 19:44:52.028255 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/abda062b-22ed-4d21-adbb-f2b906e36e02-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9\" (UID: \"abda062b-22ed-4d21-adbb-f2b906e36e02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" Feb 18 19:44:52 crc kubenswrapper[4942]: I0218 19:44:52.130427 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/abda062b-22ed-4d21-adbb-f2b906e36e02-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9\" (UID: \"abda062b-22ed-4d21-adbb-f2b906e36e02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" Feb 18 19:44:52 crc kubenswrapper[4942]: I0218 19:44:52.130640 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/abda062b-22ed-4d21-adbb-f2b906e36e02-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9\" (UID: \"abda062b-22ed-4d21-adbb-f2b906e36e02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" Feb 18 19:44:52 crc 
kubenswrapper[4942]: I0218 19:44:52.130713 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx945\" (UniqueName: \"kubernetes.io/projected/abda062b-22ed-4d21-adbb-f2b906e36e02-kube-api-access-fx945\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9\" (UID: \"abda062b-22ed-4d21-adbb-f2b906e36e02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" Feb 18 19:44:52 crc kubenswrapper[4942]: I0218 19:44:52.136440 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/abda062b-22ed-4d21-adbb-f2b906e36e02-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9\" (UID: \"abda062b-22ed-4d21-adbb-f2b906e36e02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" Feb 18 19:44:52 crc kubenswrapper[4942]: I0218 19:44:52.137956 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/abda062b-22ed-4d21-adbb-f2b906e36e02-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9\" (UID: \"abda062b-22ed-4d21-adbb-f2b906e36e02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" Feb 18 19:44:52 crc kubenswrapper[4942]: I0218 19:44:52.159627 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx945\" (UniqueName: \"kubernetes.io/projected/abda062b-22ed-4d21-adbb-f2b906e36e02-kube-api-access-fx945\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9\" (UID: \"abda062b-22ed-4d21-adbb-f2b906e36e02\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" Feb 18 19:44:52 crc kubenswrapper[4942]: I0218 19:44:52.210899 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" Feb 18 19:44:52 crc kubenswrapper[4942]: I0218 19:44:52.769259 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9"] Feb 18 19:44:52 crc kubenswrapper[4942]: I0218 19:44:52.805751 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" event={"ID":"abda062b-22ed-4d21-adbb-f2b906e36e02","Type":"ContainerStarted","Data":"63f65f48ee5ab0059c48c01f1282e83a7fc007edbf7a2254b1c98d6e5aa16551"} Feb 18 19:44:53 crc kubenswrapper[4942]: I0218 19:44:53.741582 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:44:53 crc kubenswrapper[4942]: I0218 19:44:53.741665 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:44:53 crc kubenswrapper[4942]: I0218 19:44:53.817221 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" event={"ID":"abda062b-22ed-4d21-adbb-f2b906e36e02","Type":"ContainerStarted","Data":"56db9471b0cd01eb8b5e1e757306ee35e2890630e80601f66b36dbc89054a34f"} Feb 18 19:44:53 crc kubenswrapper[4942]: I0218 19:44:53.836875 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" podStartSLOduration=2.40061548 
podStartE2EDuration="2.836845419s" podCreationTimestamp="2026-02-18 19:44:51 +0000 UTC" firstStartedPulling="2026-02-18 19:44:52.774580705 +0000 UTC m=+1652.479513400" lastFinishedPulling="2026-02-18 19:44:53.210810664 +0000 UTC m=+1652.915743339" observedRunningTime="2026-02-18 19:44:53.832191376 +0000 UTC m=+1653.537124081" watchObservedRunningTime="2026-02-18 19:44:53.836845419 +0000 UTC m=+1653.541778094" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.146218 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9"] Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.147834 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.151313 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.152121 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.170158 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9"] Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.311363 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f02d65f2-f70f-4982-a9d5-fc9d75091181-config-volume\") pod \"collect-profiles-29524065-ckvj9\" (UID: \"f02d65f2-f70f-4982-a9d5-fc9d75091181\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.311698 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f02d65f2-f70f-4982-a9d5-fc9d75091181-secret-volume\") pod \"collect-profiles-29524065-ckvj9\" (UID: \"f02d65f2-f70f-4982-a9d5-fc9d75091181\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.311857 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnzqz\" (UniqueName: \"kubernetes.io/projected/f02d65f2-f70f-4982-a9d5-fc9d75091181-kube-api-access-hnzqz\") pod \"collect-profiles-29524065-ckvj9\" (UID: \"f02d65f2-f70f-4982-a9d5-fc9d75091181\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.413699 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnzqz\" (UniqueName: \"kubernetes.io/projected/f02d65f2-f70f-4982-a9d5-fc9d75091181-kube-api-access-hnzqz\") pod \"collect-profiles-29524065-ckvj9\" (UID: \"f02d65f2-f70f-4982-a9d5-fc9d75091181\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.413852 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f02d65f2-f70f-4982-a9d5-fc9d75091181-config-volume\") pod \"collect-profiles-29524065-ckvj9\" (UID: \"f02d65f2-f70f-4982-a9d5-fc9d75091181\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.413880 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f02d65f2-f70f-4982-a9d5-fc9d75091181-secret-volume\") pod \"collect-profiles-29524065-ckvj9\" (UID: \"f02d65f2-f70f-4982-a9d5-fc9d75091181\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.414601 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f02d65f2-f70f-4982-a9d5-fc9d75091181-config-volume\") pod \"collect-profiles-29524065-ckvj9\" (UID: \"f02d65f2-f70f-4982-a9d5-fc9d75091181\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.423466 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f02d65f2-f70f-4982-a9d5-fc9d75091181-secret-volume\") pod \"collect-profiles-29524065-ckvj9\" (UID: \"f02d65f2-f70f-4982-a9d5-fc9d75091181\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.438855 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnzqz\" (UniqueName: \"kubernetes.io/projected/f02d65f2-f70f-4982-a9d5-fc9d75091181-kube-api-access-hnzqz\") pod \"collect-profiles-29524065-ckvj9\" (UID: \"f02d65f2-f70f-4982-a9d5-fc9d75091181\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.479668 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" Feb 18 19:45:00 crc kubenswrapper[4942]: I0218 19:45:00.956737 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9"] Feb 18 19:45:00 crc kubenswrapper[4942]: W0218 19:45:00.970873 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf02d65f2_f70f_4982_a9d5_fc9d75091181.slice/crio-f5e0a0f6fac191b773c4a7caace007f6303aacd1e6bbd0284ba6f31790062301 WatchSource:0}: Error finding container f5e0a0f6fac191b773c4a7caace007f6303aacd1e6bbd0284ba6f31790062301: Status 404 returned error can't find the container with id f5e0a0f6fac191b773c4a7caace007f6303aacd1e6bbd0284ba6f31790062301 Feb 18 19:45:01 crc kubenswrapper[4942]: I0218 19:45:01.916048 4942 generic.go:334] "Generic (PLEG): container finished" podID="f02d65f2-f70f-4982-a9d5-fc9d75091181" containerID="bdd33fc87e63584fee347049c15193b1ff470c22181f3d250c7e0de28ba81fd9" exitCode=0 Feb 18 19:45:01 crc kubenswrapper[4942]: I0218 19:45:01.916168 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" event={"ID":"f02d65f2-f70f-4982-a9d5-fc9d75091181","Type":"ContainerDied","Data":"bdd33fc87e63584fee347049c15193b1ff470c22181f3d250c7e0de28ba81fd9"} Feb 18 19:45:01 crc kubenswrapper[4942]: I0218 19:45:01.917600 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" event={"ID":"f02d65f2-f70f-4982-a9d5-fc9d75091181","Type":"ContainerStarted","Data":"f5e0a0f6fac191b773c4a7caace007f6303aacd1e6bbd0284ba6f31790062301"} Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.096366 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-zw8ls"] Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.106267 4942 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-zw8ls"] Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.313222 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.475881 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f02d65f2-f70f-4982-a9d5-fc9d75091181-config-volume\") pod \"f02d65f2-f70f-4982-a9d5-fc9d75091181\" (UID: \"f02d65f2-f70f-4982-a9d5-fc9d75091181\") " Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.476285 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnzqz\" (UniqueName: \"kubernetes.io/projected/f02d65f2-f70f-4982-a9d5-fc9d75091181-kube-api-access-hnzqz\") pod \"f02d65f2-f70f-4982-a9d5-fc9d75091181\" (UID: \"f02d65f2-f70f-4982-a9d5-fc9d75091181\") " Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.476315 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f02d65f2-f70f-4982-a9d5-fc9d75091181-secret-volume\") pod \"f02d65f2-f70f-4982-a9d5-fc9d75091181\" (UID: \"f02d65f2-f70f-4982-a9d5-fc9d75091181\") " Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.476510 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f02d65f2-f70f-4982-a9d5-fc9d75091181-config-volume" (OuterVolumeSpecName: "config-volume") pod "f02d65f2-f70f-4982-a9d5-fc9d75091181" (UID: "f02d65f2-f70f-4982-a9d5-fc9d75091181"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.476878 4942 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f02d65f2-f70f-4982-a9d5-fc9d75091181-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.482526 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f02d65f2-f70f-4982-a9d5-fc9d75091181-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f02d65f2-f70f-4982-a9d5-fc9d75091181" (UID: "f02d65f2-f70f-4982-a9d5-fc9d75091181"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.483951 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f02d65f2-f70f-4982-a9d5-fc9d75091181-kube-api-access-hnzqz" (OuterVolumeSpecName: "kube-api-access-hnzqz") pod "f02d65f2-f70f-4982-a9d5-fc9d75091181" (UID: "f02d65f2-f70f-4982-a9d5-fc9d75091181"). InnerVolumeSpecName "kube-api-access-hnzqz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.578689 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnzqz\" (UniqueName: \"kubernetes.io/projected/f02d65f2-f70f-4982-a9d5-fc9d75091181-kube-api-access-hnzqz\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.578734 4942 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f02d65f2-f70f-4982-a9d5-fc9d75091181-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.937170 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" event={"ID":"f02d65f2-f70f-4982-a9d5-fc9d75091181","Type":"ContainerDied","Data":"f5e0a0f6fac191b773c4a7caace007f6303aacd1e6bbd0284ba6f31790062301"} Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.937495 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5e0a0f6fac191b773c4a7caace007f6303aacd1e6bbd0284ba6f31790062301" Feb 18 19:45:03 crc kubenswrapper[4942]: I0218 19:45:03.937462 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9" Feb 18 19:45:05 crc kubenswrapper[4942]: I0218 19:45:05.047858 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3" path="/var/lib/kubelet/pods/72c2952a-cb2e-4a2e-bdb3-bf97a901cbe3/volumes" Feb 18 19:45:20 crc kubenswrapper[4942]: I0218 19:45:20.036200 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-p9l27"] Feb 18 19:45:20 crc kubenswrapper[4942]: I0218 19:45:20.046658 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-p9l27"] Feb 18 19:45:21 crc kubenswrapper[4942]: I0218 19:45:21.071549 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6c912f7-7ee8-4f53-a358-a6a6a5088be5" path="/var/lib/kubelet/pods/a6c912f7-7ee8-4f53-a358-a6a6a5088be5/volumes" Feb 18 19:45:23 crc kubenswrapper[4942]: I0218 19:45:23.741481 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:45:23 crc kubenswrapper[4942]: I0218 19:45:23.742032 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:45:23 crc kubenswrapper[4942]: I0218 19:45:23.742124 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:45:23 crc kubenswrapper[4942]: I0218 19:45:23.743395 4942 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:45:23 crc kubenswrapper[4942]: I0218 19:45:23.743532 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" gracePeriod=600 Feb 18 19:45:23 crc kubenswrapper[4942]: E0218 19:45:23.880475 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:45:24 crc kubenswrapper[4942]: I0218 19:45:24.181694 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" exitCode=0 Feb 18 19:45:24 crc kubenswrapper[4942]: I0218 19:45:24.181797 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19"} Feb 18 19:45:24 crc kubenswrapper[4942]: I0218 19:45:24.181903 4942 scope.go:117] "RemoveContainer" containerID="0f7c7ce7194dc50e8e7ff903a9631c5d1d6654771462dbd4df2dfa299f3641bf" Feb 18 19:45:24 crc 
kubenswrapper[4942]: I0218 19:45:24.183087 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:45:24 crc kubenswrapper[4942]: E0218 19:45:24.184089 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:45:26 crc kubenswrapper[4942]: I0218 19:45:26.032789 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-9ntpw"] Feb 18 19:45:26 crc kubenswrapper[4942]: I0218 19:45:26.040529 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-9ntpw"] Feb 18 19:45:27 crc kubenswrapper[4942]: I0218 19:45:27.045200 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af8e769c-00c3-41a1-97c4-d91902767dfe" path="/var/lib/kubelet/pods/af8e769c-00c3-41a1-97c4-d91902767dfe/volumes" Feb 18 19:45:28 crc kubenswrapper[4942]: I0218 19:45:28.040832 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-4h9n5"] Feb 18 19:45:28 crc kubenswrapper[4942]: I0218 19:45:28.056104 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-4h9n5"] Feb 18 19:45:29 crc kubenswrapper[4942]: I0218 19:45:29.048281 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="983d5293-8413-4a29-88b2-ba775b3b4a8b" path="/var/lib/kubelet/pods/983d5293-8413-4a29-88b2-ba775b3b4a8b/volumes" Feb 18 19:45:30 crc kubenswrapper[4942]: I0218 19:45:30.029448 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-tnqg7"] Feb 18 19:45:30 crc kubenswrapper[4942]: I0218 
19:45:30.039285 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-tnqg7"] Feb 18 19:45:31 crc kubenswrapper[4942]: I0218 19:45:31.085248 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f29ae8a1-b3cc-452c-ac99-b450ef3125d8" path="/var/lib/kubelet/pods/f29ae8a1-b3cc-452c-ac99-b450ef3125d8/volumes" Feb 18 19:45:37 crc kubenswrapper[4942]: I0218 19:45:37.037748 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:45:37 crc kubenswrapper[4942]: E0218 19:45:37.038718 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:45:48 crc kubenswrapper[4942]: I0218 19:45:48.036652 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:45:48 crc kubenswrapper[4942]: E0218 19:45:48.037388 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:45:48 crc kubenswrapper[4942]: I0218 19:45:48.271057 4942 scope.go:117] "RemoveContainer" containerID="a5a266a5f35f400b4926f114a4e397e8de76de3f56a176f14c64d1b553d123f4" Feb 18 19:45:48 crc kubenswrapper[4942]: I0218 19:45:48.306338 4942 scope.go:117] "RemoveContainer" 
containerID="1f69a1fd29ab925cd8cf8e9aff116531b62f274c86f6998747eb096250393ed9" Feb 18 19:45:48 crc kubenswrapper[4942]: I0218 19:45:48.366493 4942 scope.go:117] "RemoveContainer" containerID="8c6545f8eaa3b666b06d888c16ee9caa900adcec0bcd683e72e4f96180bd297d" Feb 18 19:45:48 crc kubenswrapper[4942]: I0218 19:45:48.422025 4942 scope.go:117] "RemoveContainer" containerID="96103ab065d78416959c1d84cf5d96a95a67496c5bf29a0bff2dd2c96318a211" Feb 18 19:45:48 crc kubenswrapper[4942]: I0218 19:45:48.483205 4942 scope.go:117] "RemoveContainer" containerID="16fd17087ed9bd06ba590a2897d1853b93c4e9cb882e3c311955fd4cf453c84b" Feb 18 19:45:48 crc kubenswrapper[4942]: I0218 19:45:48.537884 4942 scope.go:117] "RemoveContainer" containerID="373bd2d7e6e62cf5defbed6522169de2de3264581e7024f113223b1465d241c5" Feb 18 19:45:50 crc kubenswrapper[4942]: I0218 19:45:50.032510 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-qvzh5"] Feb 18 19:45:50 crc kubenswrapper[4942]: I0218 19:45:50.040178 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-qvzh5"] Feb 18 19:45:50 crc kubenswrapper[4942]: I0218 19:45:50.046808 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-h2kjs"] Feb 18 19:45:50 crc kubenswrapper[4942]: I0218 19:45:50.053436 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-h2kjs"] Feb 18 19:45:51 crc kubenswrapper[4942]: I0218 19:45:51.053378 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8aeac097-ba93-4859-a14f-839ae1421e28" path="/var/lib/kubelet/pods/8aeac097-ba93-4859-a14f-839ae1421e28/volumes" Feb 18 19:45:51 crc kubenswrapper[4942]: I0218 19:45:51.054296 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8db7f68b-a733-44fc-90b9-a1dd489fb42d" path="/var/lib/kubelet/pods/8db7f68b-a733-44fc-90b9-a1dd489fb42d/volumes" Feb 18 19:46:02 crc kubenswrapper[4942]: I0218 19:46:02.036190 4942 
scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:46:02 crc kubenswrapper[4942]: E0218 19:46:02.038661 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:46:03 crc kubenswrapper[4942]: I0218 19:46:03.635365 4942 generic.go:334] "Generic (PLEG): container finished" podID="abda062b-22ed-4d21-adbb-f2b906e36e02" containerID="56db9471b0cd01eb8b5e1e757306ee35e2890630e80601f66b36dbc89054a34f" exitCode=0 Feb 18 19:46:03 crc kubenswrapper[4942]: I0218 19:46:03.635501 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" event={"ID":"abda062b-22ed-4d21-adbb-f2b906e36e02","Type":"ContainerDied","Data":"56db9471b0cd01eb8b5e1e757306ee35e2890630e80601f66b36dbc89054a34f"} Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.151550 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.293811 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/abda062b-22ed-4d21-adbb-f2b906e36e02-inventory\") pod \"abda062b-22ed-4d21-adbb-f2b906e36e02\" (UID: \"abda062b-22ed-4d21-adbb-f2b906e36e02\") " Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.294068 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx945\" (UniqueName: \"kubernetes.io/projected/abda062b-22ed-4d21-adbb-f2b906e36e02-kube-api-access-fx945\") pod \"abda062b-22ed-4d21-adbb-f2b906e36e02\" (UID: \"abda062b-22ed-4d21-adbb-f2b906e36e02\") " Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.294213 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/abda062b-22ed-4d21-adbb-f2b906e36e02-ssh-key-openstack-edpm-ipam\") pod \"abda062b-22ed-4d21-adbb-f2b906e36e02\" (UID: \"abda062b-22ed-4d21-adbb-f2b906e36e02\") " Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.304121 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abda062b-22ed-4d21-adbb-f2b906e36e02-kube-api-access-fx945" (OuterVolumeSpecName: "kube-api-access-fx945") pod "abda062b-22ed-4d21-adbb-f2b906e36e02" (UID: "abda062b-22ed-4d21-adbb-f2b906e36e02"). InnerVolumeSpecName "kube-api-access-fx945". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.396583 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx945\" (UniqueName: \"kubernetes.io/projected/abda062b-22ed-4d21-adbb-f2b906e36e02-kube-api-access-fx945\") on node \"crc\" DevicePath \"\"" Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.794780 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl"] Feb 18 19:46:05 crc kubenswrapper[4942]: E0218 19:46:05.795213 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f02d65f2-f70f-4982-a9d5-fc9d75091181" containerName="collect-profiles" Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.795247 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="f02d65f2-f70f-4982-a9d5-fc9d75091181" containerName="collect-profiles" Feb 18 19:46:05 crc kubenswrapper[4942]: E0218 19:46:05.795296 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abda062b-22ed-4d21-adbb-f2b906e36e02" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.795306 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="abda062b-22ed-4d21-adbb-f2b906e36e02" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.795497 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="abda062b-22ed-4d21-adbb-f2b906e36e02" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.795526 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="f02d65f2-f70f-4982-a9d5-fc9d75091181" containerName="collect-profiles" Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.796272 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.814079 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl"] Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.906314 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57203330-4497-4588-ac58-2cff41481077-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2pctl\" (UID: \"57203330-4497-4588-ac58-2cff41481077\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.906658 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8gq8\" (UniqueName: \"kubernetes.io/projected/57203330-4497-4588-ac58-2cff41481077-kube-api-access-j8gq8\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2pctl\" (UID: \"57203330-4497-4588-ac58-2cff41481077\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" Feb 18 19:46:05 crc kubenswrapper[4942]: I0218 19:46:05.906780 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57203330-4497-4588-ac58-2cff41481077-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2pctl\" (UID: \"57203330-4497-4588-ac58-2cff41481077\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.009358 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/57203330-4497-4588-ac58-2cff41481077-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2pctl\" (UID: \"57203330-4497-4588-ac58-2cff41481077\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.009623 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8gq8\" (UniqueName: \"kubernetes.io/projected/57203330-4497-4588-ac58-2cff41481077-kube-api-access-j8gq8\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2pctl\" (UID: \"57203330-4497-4588-ac58-2cff41481077\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.009700 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57203330-4497-4588-ac58-2cff41481077-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2pctl\" (UID: \"57203330-4497-4588-ac58-2cff41481077\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.017354 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57203330-4497-4588-ac58-2cff41481077-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2pctl\" (UID: \"57203330-4497-4588-ac58-2cff41481077\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.023023 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57203330-4497-4588-ac58-2cff41481077-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2pctl\" (UID: \"57203330-4497-4588-ac58-2cff41481077\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.060904 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8gq8\" (UniqueName: \"kubernetes.io/projected/57203330-4497-4588-ac58-2cff41481077-kube-api-access-j8gq8\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2pctl\" (UID: \"57203330-4497-4588-ac58-2cff41481077\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.119841 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.705474 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abda062b-22ed-4d21-adbb-f2b906e36e02-inventory" (OuterVolumeSpecName: "inventory") pod "abda062b-22ed-4d21-adbb-f2b906e36e02" (UID: "abda062b-22ed-4d21-adbb-f2b906e36e02"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.715441 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abda062b-22ed-4d21-adbb-f2b906e36e02-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "abda062b-22ed-4d21-adbb-f2b906e36e02" (UID: "abda062b-22ed-4d21-adbb-f2b906e36e02"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.728508 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/abda062b-22ed-4d21-adbb-f2b906e36e02-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.728593 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/abda062b-22ed-4d21-adbb-f2b906e36e02-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.805177 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" event={"ID":"abda062b-22ed-4d21-adbb-f2b906e36e02","Type":"ContainerDied","Data":"63f65f48ee5ab0059c48c01f1282e83a7fc007edbf7a2254b1c98d6e5aa16551"} Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.805214 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63f65f48ee5ab0059c48c01f1282e83a7fc007edbf7a2254b1c98d6e5aa16551" Feb 18 19:46:06 crc kubenswrapper[4942]: I0218 19:46:06.805271 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-v5xn9" Feb 18 19:46:07 crc kubenswrapper[4942]: I0218 19:46:07.544254 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl"] Feb 18 19:46:07 crc kubenswrapper[4942]: I0218 19:46:07.813283 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" event={"ID":"57203330-4497-4588-ac58-2cff41481077","Type":"ContainerStarted","Data":"007cb05fac2a4c3ca4ff7d6370c98c9994e395dd0aafe8650d080837b9475653"} Feb 18 19:46:08 crc kubenswrapper[4942]: I0218 19:46:08.830475 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" event={"ID":"57203330-4497-4588-ac58-2cff41481077","Type":"ContainerStarted","Data":"2639b93e0de7223af939065d46c5fbb2b93ee2789e4195514a63bc70033c0a67"} Feb 18 19:46:08 crc kubenswrapper[4942]: I0218 19:46:08.855110 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" podStartSLOduration=3.144278467 podStartE2EDuration="3.855094849s" podCreationTimestamp="2026-02-18 19:46:05 +0000 UTC" firstStartedPulling="2026-02-18 19:46:07.556069052 +0000 UTC m=+1727.261001717" lastFinishedPulling="2026-02-18 19:46:08.266885424 +0000 UTC m=+1727.971818099" observedRunningTime="2026-02-18 19:46:08.848603588 +0000 UTC m=+1728.553536283" watchObservedRunningTime="2026-02-18 19:46:08.855094849 +0000 UTC m=+1728.560027514" Feb 18 19:46:13 crc kubenswrapper[4942]: I0218 19:46:13.897701 4942 generic.go:334] "Generic (PLEG): container finished" podID="57203330-4497-4588-ac58-2cff41481077" containerID="2639b93e0de7223af939065d46c5fbb2b93ee2789e4195514a63bc70033c0a67" exitCode=0 Feb 18 19:46:13 crc kubenswrapper[4942]: I0218 19:46:13.897825 4942 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" event={"ID":"57203330-4497-4588-ac58-2cff41481077","Type":"ContainerDied","Data":"2639b93e0de7223af939065d46c5fbb2b93ee2789e4195514a63bc70033c0a67"} Feb 18 19:46:14 crc kubenswrapper[4942]: I0218 19:46:14.035646 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:46:14 crc kubenswrapper[4942]: E0218 19:46:14.035913 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:46:15 crc kubenswrapper[4942]: I0218 19:46:15.396726 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" Feb 18 19:46:15 crc kubenswrapper[4942]: I0218 19:46:15.482516 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57203330-4497-4588-ac58-2cff41481077-inventory\") pod \"57203330-4497-4588-ac58-2cff41481077\" (UID: \"57203330-4497-4588-ac58-2cff41481077\") " Feb 18 19:46:15 crc kubenswrapper[4942]: I0218 19:46:15.482623 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8gq8\" (UniqueName: \"kubernetes.io/projected/57203330-4497-4588-ac58-2cff41481077-kube-api-access-j8gq8\") pod \"57203330-4497-4588-ac58-2cff41481077\" (UID: \"57203330-4497-4588-ac58-2cff41481077\") " Feb 18 19:46:15 crc kubenswrapper[4942]: I0218 19:46:15.482662 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57203330-4497-4588-ac58-2cff41481077-ssh-key-openstack-edpm-ipam\") pod \"57203330-4497-4588-ac58-2cff41481077\" (UID: \"57203330-4497-4588-ac58-2cff41481077\") " Feb 18 19:46:15 crc kubenswrapper[4942]: I0218 19:46:15.497931 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57203330-4497-4588-ac58-2cff41481077-kube-api-access-j8gq8" (OuterVolumeSpecName: "kube-api-access-j8gq8") pod "57203330-4497-4588-ac58-2cff41481077" (UID: "57203330-4497-4588-ac58-2cff41481077"). InnerVolumeSpecName "kube-api-access-j8gq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:46:15 crc kubenswrapper[4942]: I0218 19:46:15.518030 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57203330-4497-4588-ac58-2cff41481077-inventory" (OuterVolumeSpecName: "inventory") pod "57203330-4497-4588-ac58-2cff41481077" (UID: "57203330-4497-4588-ac58-2cff41481077"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:46:15 crc kubenswrapper[4942]: I0218 19:46:15.522055 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57203330-4497-4588-ac58-2cff41481077-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "57203330-4497-4588-ac58-2cff41481077" (UID: "57203330-4497-4588-ac58-2cff41481077"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:46:15 crc kubenswrapper[4942]: I0218 19:46:15.585809 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57203330-4497-4588-ac58-2cff41481077-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:46:15 crc kubenswrapper[4942]: I0218 19:46:15.585849 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8gq8\" (UniqueName: \"kubernetes.io/projected/57203330-4497-4588-ac58-2cff41481077-kube-api-access-j8gq8\") on node \"crc\" DevicePath \"\"" Feb 18 19:46:15 crc kubenswrapper[4942]: I0218 19:46:15.585864 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/57203330-4497-4588-ac58-2cff41481077-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:46:15 crc kubenswrapper[4942]: I0218 19:46:15.922587 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" event={"ID":"57203330-4497-4588-ac58-2cff41481077","Type":"ContainerDied","Data":"007cb05fac2a4c3ca4ff7d6370c98c9994e395dd0aafe8650d080837b9475653"} Feb 18 19:46:15 crc kubenswrapper[4942]: I0218 19:46:15.923067 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="007cb05fac2a4c3ca4ff7d6370c98c9994e395dd0aafe8650d080837b9475653" Feb 18 19:46:15 crc kubenswrapper[4942]: I0218 
19:46:15.922691 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2pctl" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.019147 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6"] Feb 18 19:46:16 crc kubenswrapper[4942]: E0218 19:46:16.019993 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57203330-4497-4588-ac58-2cff41481077" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.020110 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="57203330-4497-4588-ac58-2cff41481077" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.020435 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="57203330-4497-4588-ac58-2cff41481077" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.021452 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.023915 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.024187 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.028273 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.043008 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.046718 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6"] Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.197983 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fmj7\" (UniqueName: \"kubernetes.io/projected/59349fa4-b215-47f3-93a7-7e9aca054947-kube-api-access-8fmj7\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-98ch6\" (UID: \"59349fa4-b215-47f3-93a7-7e9aca054947\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.198312 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59349fa4-b215-47f3-93a7-7e9aca054947-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-98ch6\" (UID: \"59349fa4-b215-47f3-93a7-7e9aca054947\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6" Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 
19:46:16.198501 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59349fa4-b215-47f3-93a7-7e9aca054947-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-98ch6\" (UID: \"59349fa4-b215-47f3-93a7-7e9aca054947\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6"
Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.300242 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fmj7\" (UniqueName: \"kubernetes.io/projected/59349fa4-b215-47f3-93a7-7e9aca054947-kube-api-access-8fmj7\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-98ch6\" (UID: \"59349fa4-b215-47f3-93a7-7e9aca054947\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6"
Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.300316 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59349fa4-b215-47f3-93a7-7e9aca054947-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-98ch6\" (UID: \"59349fa4-b215-47f3-93a7-7e9aca054947\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6"
Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.300431 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59349fa4-b215-47f3-93a7-7e9aca054947-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-98ch6\" (UID: \"59349fa4-b215-47f3-93a7-7e9aca054947\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6"
Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.305699 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59349fa4-b215-47f3-93a7-7e9aca054947-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-98ch6\" (UID: \"59349fa4-b215-47f3-93a7-7e9aca054947\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6"
Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.306318 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59349fa4-b215-47f3-93a7-7e9aca054947-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-98ch6\" (UID: \"59349fa4-b215-47f3-93a7-7e9aca054947\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6"
Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.323503 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fmj7\" (UniqueName: \"kubernetes.io/projected/59349fa4-b215-47f3-93a7-7e9aca054947-kube-api-access-8fmj7\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-98ch6\" (UID: \"59349fa4-b215-47f3-93a7-7e9aca054947\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6"
Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.351210 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6"
Feb 18 19:46:16 crc kubenswrapper[4942]: I0218 19:46:16.970123 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6"]
Feb 18 19:46:17 crc kubenswrapper[4942]: I0218 19:46:17.947869 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6" event={"ID":"59349fa4-b215-47f3-93a7-7e9aca054947","Type":"ContainerStarted","Data":"865f01125c6391deacd831979ae4a148f4a3a2136ebe5b39793d52d94a72dbb3"}
Feb 18 19:46:17 crc kubenswrapper[4942]: I0218 19:46:17.948613 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6" event={"ID":"59349fa4-b215-47f3-93a7-7e9aca054947","Type":"ContainerStarted","Data":"49e8b3eead57c998bc9134855853dc809162614b2ed5c9cea7ed27fa70db4276"}
Feb 18 19:46:17 crc kubenswrapper[4942]: I0218 19:46:17.974309 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6" podStartSLOduration=2.572395549 podStartE2EDuration="2.974272563s" podCreationTimestamp="2026-02-18 19:46:15 +0000 UTC" firstStartedPulling="2026-02-18 19:46:16.976391948 +0000 UTC m=+1736.681324613" lastFinishedPulling="2026-02-18 19:46:17.378268922 +0000 UTC m=+1737.083201627" observedRunningTime="2026-02-18 19:46:17.963920719 +0000 UTC m=+1737.668853464" watchObservedRunningTime="2026-02-18 19:46:17.974272563 +0000 UTC m=+1737.679205268"
Feb 18 19:46:26 crc kubenswrapper[4942]: I0218 19:46:26.037581 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19"
Feb 18 19:46:26 crc kubenswrapper[4942]: E0218 19:46:26.039107 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 19:46:27 crc kubenswrapper[4942]: I0218 19:46:27.051035 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-hxdjn"]
Feb 18 19:46:27 crc kubenswrapper[4942]: I0218 19:46:27.058284 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-hxdjn"]
Feb 18 19:46:29 crc kubenswrapper[4942]: I0218 19:46:29.078512 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54e11ed4-f85e-4125-acc8-b0b86cef91fb" path="/var/lib/kubelet/pods/54e11ed4-f85e-4125-acc8-b0b86cef91fb/volumes"
Feb 18 19:46:29 crc kubenswrapper[4942]: I0218 19:46:29.080189 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-f195-account-create-update-jjctk"]
Feb 18 19:46:29 crc kubenswrapper[4942]: I0218 19:46:29.080224 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-f9r9j"]
Feb 18 19:46:29 crc kubenswrapper[4942]: I0218 19:46:29.081481 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-d7fm8"]
Feb 18 19:46:29 crc kubenswrapper[4942]: I0218 19:46:29.099387 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-d7fm8"]
Feb 18 19:46:29 crc kubenswrapper[4942]: I0218 19:46:29.110793 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-f195-account-create-update-jjctk"]
Feb 18 19:46:29 crc kubenswrapper[4942]: I0218 19:46:29.118491 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-f9r9j"]
Feb 18 19:46:30 crc kubenswrapper[4942]: I0218 19:46:30.036709 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1b0e-account-create-update-p6b7z"]
Feb 18 19:46:30 crc kubenswrapper[4942]: I0218 19:46:30.047164 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-a3b1-account-create-update-sdgp2"]
Feb 18 19:46:30 crc kubenswrapper[4942]: I0218 19:46:30.058088 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-a3b1-account-create-update-sdgp2"]
Feb 18 19:46:30 crc kubenswrapper[4942]: I0218 19:46:30.065704 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1b0e-account-create-update-p6b7z"]
Feb 18 19:46:31 crc kubenswrapper[4942]: I0218 19:46:31.054049 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3319773b-d924-402a-adbd-f421ee14c994" path="/var/lib/kubelet/pods/3319773b-d924-402a-adbd-f421ee14c994/volumes"
Feb 18 19:46:31 crc kubenswrapper[4942]: I0218 19:46:31.055415 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="908017b2-bbca-42f2-b6a0-af358a18d1b7" path="/var/lib/kubelet/pods/908017b2-bbca-42f2-b6a0-af358a18d1b7/volumes"
Feb 18 19:46:31 crc kubenswrapper[4942]: I0218 19:46:31.056336 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27" path="/var/lib/kubelet/pods/bdd3a7b9-5bb1-47a4-8a4a-95131e50cf27/volumes"
Feb 18 19:46:31 crc kubenswrapper[4942]: I0218 19:46:31.057784 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de103e96-857c-4fa9-b78b-51c8f4734643" path="/var/lib/kubelet/pods/de103e96-857c-4fa9-b78b-51c8f4734643/volumes"
Feb 18 19:46:31 crc kubenswrapper[4942]: I0218 19:46:31.059469 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef4ca914-d763-484f-aa35-39dbd725d14c" path="/var/lib/kubelet/pods/ef4ca914-d763-484f-aa35-39dbd725d14c/volumes"
Feb 18 19:46:38 crc kubenswrapper[4942]: I0218 19:46:38.036907 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19"
Feb 18 19:46:38 crc kubenswrapper[4942]: E0218 19:46:38.037706 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 19:46:48 crc kubenswrapper[4942]: I0218 19:46:48.667008 4942 scope.go:117] "RemoveContainer" containerID="c2c74965083b09d2fda5c205fdee24ab8d991088f20cd6c4fd29973dbc9a7c39"
Feb 18 19:46:48 crc kubenswrapper[4942]: I0218 19:46:48.698223 4942 scope.go:117] "RemoveContainer" containerID="12651ed44c362c43a5a615685457fd590c1593f4afa3ac50fda9dea54a2e1f71"
Feb 18 19:46:48 crc kubenswrapper[4942]: I0218 19:46:48.776182 4942 scope.go:117] "RemoveContainer" containerID="866788c6c2a051f7476fcb5d58fd9c13e62810bec69e94d021b4616590e98f0b"
Feb 18 19:46:48 crc kubenswrapper[4942]: I0218 19:46:48.821140 4942 scope.go:117] "RemoveContainer" containerID="a09c56da144b09bdcb7865a7cc27a2ff95e7937bd4f16a766144008dd1c49144"
Feb 18 19:46:48 crc kubenswrapper[4942]: I0218 19:46:48.884061 4942 scope.go:117] "RemoveContainer" containerID="4ee086e7e747f10b7d38270d86480864775d35a33a827da89168941ff41e3484"
Feb 18 19:46:48 crc kubenswrapper[4942]: I0218 19:46:48.922376 4942 scope.go:117] "RemoveContainer" containerID="9f2c359e5e4f7ba110dc92287a82c170423f21670d64c2a6b420aa0beff96ce3"
Feb 18 19:46:48 crc kubenswrapper[4942]: I0218 19:46:48.957364 4942 scope.go:117] "RemoveContainer" containerID="4d566d8d0c1f2395dae51975108188a50f273b881992f487f3b84531a9f2e9f1"
Feb 18 19:46:48 crc kubenswrapper[4942]: I0218 19:46:48.983509 4942 scope.go:117] "RemoveContainer" containerID="e0015f6cb0ed0e4e677017a14f5fcb4378f27372b8c41b1fdca89664675f56a0"
Feb 18 19:46:53 crc kubenswrapper[4942]: I0218 19:46:53.037207 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19"
Feb 18 19:46:53 crc kubenswrapper[4942]: E0218 19:46:53.038667 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 19:46:56 crc kubenswrapper[4942]: I0218 19:46:56.373657 4942 generic.go:334] "Generic (PLEG): container finished" podID="59349fa4-b215-47f3-93a7-7e9aca054947" containerID="865f01125c6391deacd831979ae4a148f4a3a2136ebe5b39793d52d94a72dbb3" exitCode=0
Feb 18 19:46:56 crc kubenswrapper[4942]: I0218 19:46:56.373721 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6" event={"ID":"59349fa4-b215-47f3-93a7-7e9aca054947","Type":"ContainerDied","Data":"865f01125c6391deacd831979ae4a148f4a3a2136ebe5b39793d52d94a72dbb3"}
Feb 18 19:46:57 crc kubenswrapper[4942]: I0218 19:46:57.805038 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6"
Feb 18 19:46:57 crc kubenswrapper[4942]: I0218 19:46:57.992612 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59349fa4-b215-47f3-93a7-7e9aca054947-inventory\") pod \"59349fa4-b215-47f3-93a7-7e9aca054947\" (UID: \"59349fa4-b215-47f3-93a7-7e9aca054947\") "
Feb 18 19:46:57 crc kubenswrapper[4942]: I0218 19:46:57.992922 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fmj7\" (UniqueName: \"kubernetes.io/projected/59349fa4-b215-47f3-93a7-7e9aca054947-kube-api-access-8fmj7\") pod \"59349fa4-b215-47f3-93a7-7e9aca054947\" (UID: \"59349fa4-b215-47f3-93a7-7e9aca054947\") "
Feb 18 19:46:57 crc kubenswrapper[4942]: I0218 19:46:57.993163 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59349fa4-b215-47f3-93a7-7e9aca054947-ssh-key-openstack-edpm-ipam\") pod \"59349fa4-b215-47f3-93a7-7e9aca054947\" (UID: \"59349fa4-b215-47f3-93a7-7e9aca054947\") "
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.000426 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59349fa4-b215-47f3-93a7-7e9aca054947-kube-api-access-8fmj7" (OuterVolumeSpecName: "kube-api-access-8fmj7") pod "59349fa4-b215-47f3-93a7-7e9aca054947" (UID: "59349fa4-b215-47f3-93a7-7e9aca054947"). InnerVolumeSpecName "kube-api-access-8fmj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.051225 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59349fa4-b215-47f3-93a7-7e9aca054947-inventory" (OuterVolumeSpecName: "inventory") pod "59349fa4-b215-47f3-93a7-7e9aca054947" (UID: "59349fa4-b215-47f3-93a7-7e9aca054947"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.060416 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59349fa4-b215-47f3-93a7-7e9aca054947-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "59349fa4-b215-47f3-93a7-7e9aca054947" (UID: "59349fa4-b215-47f3-93a7-7e9aca054947"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.096124 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59349fa4-b215-47f3-93a7-7e9aca054947-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.096411 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fmj7\" (UniqueName: \"kubernetes.io/projected/59349fa4-b215-47f3-93a7-7e9aca054947-kube-api-access-8fmj7\") on node \"crc\" DevicePath \"\""
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.096501 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/59349fa4-b215-47f3-93a7-7e9aca054947-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.401524 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6" event={"ID":"59349fa4-b215-47f3-93a7-7e9aca054947","Type":"ContainerDied","Data":"49e8b3eead57c998bc9134855853dc809162614b2ed5c9cea7ed27fa70db4276"}
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.401608 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49e8b3eead57c998bc9134855853dc809162614b2ed5c9cea7ed27fa70db4276"
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.402248 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-98ch6"
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.509796 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd"]
Feb 18 19:46:58 crc kubenswrapper[4942]: E0218 19:46:58.510311 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59349fa4-b215-47f3-93a7-7e9aca054947" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.510413 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="59349fa4-b215-47f3-93a7-7e9aca054947" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.510675 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="59349fa4-b215-47f3-93a7-7e9aca054947" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.512750 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd"
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.518927 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.519099 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.519254 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh"
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.519408 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.525802 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd"]
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.606276 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb9js\" (UniqueName: \"kubernetes.io/projected/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-kube-api-access-zb9js\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd\" (UID: \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd"
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.606313 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd\" (UID: \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd"
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.607728 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd\" (UID: \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd"
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.709292 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb9js\" (UniqueName: \"kubernetes.io/projected/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-kube-api-access-zb9js\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd\" (UID: \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd"
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.709547 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd\" (UID: \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd"
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.709589 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd\" (UID: \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd"
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.716081 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd\" (UID: \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd"
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.721506 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd\" (UID: \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd"
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.726783 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb9js\" (UniqueName: \"kubernetes.io/projected/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-kube-api-access-zb9js\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd\" (UID: \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd"
Feb 18 19:46:58 crc kubenswrapper[4942]: I0218 19:46:58.883863 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd"
Feb 18 19:46:59 crc kubenswrapper[4942]: W0218 19:46:59.492054 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3aa31c62_3ec4_4764_b3c9_915f2ed0d979.slice/crio-40b6d04180d9292202facb30248186130f9617cf1e6b90655f07b0a1a6ace2e6 WatchSource:0}: Error finding container 40b6d04180d9292202facb30248186130f9617cf1e6b90655f07b0a1a6ace2e6: Status 404 returned error can't find the container with id 40b6d04180d9292202facb30248186130f9617cf1e6b90655f07b0a1a6ace2e6
Feb 18 19:46:59 crc kubenswrapper[4942]: I0218 19:46:59.498723 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd"]
Feb 18 19:47:00 crc kubenswrapper[4942]: I0218 19:47:00.431879 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd" event={"ID":"3aa31c62-3ec4-4764-b3c9-915f2ed0d979","Type":"ContainerStarted","Data":"b8781f6763e3c28d5b28c6dbe67fce458666fad6da6df65d68c5f9c6691cc2d4"}
Feb 18 19:47:00 crc kubenswrapper[4942]: I0218 19:47:00.432152 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd" event={"ID":"3aa31c62-3ec4-4764-b3c9-915f2ed0d979","Type":"ContainerStarted","Data":"40b6d04180d9292202facb30248186130f9617cf1e6b90655f07b0a1a6ace2e6"}
Feb 18 19:47:00 crc kubenswrapper[4942]: I0218 19:47:00.472671 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd" podStartSLOduration=2.00583914 podStartE2EDuration="2.472651659s" podCreationTimestamp="2026-02-18 19:46:58 +0000 UTC" firstStartedPulling="2026-02-18 19:46:59.494746982 +0000 UTC m=+1779.199679667" lastFinishedPulling="2026-02-18 19:46:59.961559481 +0000 UTC m=+1779.666492186" observedRunningTime="2026-02-18 19:47:00.458270679 +0000 UTC m=+1780.163203364" watchObservedRunningTime="2026-02-18 19:47:00.472651659 +0000 UTC m=+1780.177584334"
Feb 18 19:47:01 crc kubenswrapper[4942]: I0218 19:47:01.074814 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bbrrn"]
Feb 18 19:47:01 crc kubenswrapper[4942]: I0218 19:47:01.091264 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-bbrrn"]
Feb 18 19:47:03 crc kubenswrapper[4942]: I0218 19:47:03.055794 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e14c764c-c1b5-4196-a48b-2aff4c38782b" path="/var/lib/kubelet/pods/e14c764c-c1b5-4196-a48b-2aff4c38782b/volumes"
Feb 18 19:47:04 crc kubenswrapper[4942]: I0218 19:47:04.036331 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19"
Feb 18 19:47:04 crc kubenswrapper[4942]: E0218 19:47:04.036911 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 19:47:19 crc kubenswrapper[4942]: I0218 19:47:19.037074 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19"
Feb 18 19:47:19 crc kubenswrapper[4942]: E0218 19:47:19.038144 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 19:47:23 crc kubenswrapper[4942]: I0218 19:47:23.068831 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-6wkkj"]
Feb 18 19:47:23 crc kubenswrapper[4942]: I0218 19:47:23.069147 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bqfl9"]
Feb 18 19:47:23 crc kubenswrapper[4942]: I0218 19:47:23.076256 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-6wkkj"]
Feb 18 19:47:23 crc kubenswrapper[4942]: I0218 19:47:23.083639 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bqfl9"]
Feb 18 19:47:25 crc kubenswrapper[4942]: I0218 19:47:25.056748 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e2cb901-5468-4fa9-9b3a-a16f238ff6e2" path="/var/lib/kubelet/pods/2e2cb901-5468-4fa9-9b3a-a16f238ff6e2/volumes"
Feb 18 19:47:25 crc kubenswrapper[4942]: I0218 19:47:25.059561 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4a19078-b432-452e-8918-7b0f8c60e632" path="/var/lib/kubelet/pods/b4a19078-b432-452e-8918-7b0f8c60e632/volumes"
Feb 18 19:47:31 crc kubenswrapper[4942]: I0218 19:47:31.043175 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19"
Feb 18 19:47:31 crc kubenswrapper[4942]: E0218 19:47:31.043782 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 19:47:44 crc kubenswrapper[4942]: I0218 19:47:44.036439 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19"
Feb 18 19:47:44 crc kubenswrapper[4942]: E0218 19:47:44.037498 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 19:47:49 crc kubenswrapper[4942]: I0218 19:47:49.137359 4942 scope.go:117] "RemoveContainer" containerID="2d29442d9649dbaa907e5735ea0dda7657607ca6fa24c4f83c7c2be4ce910a11"
Feb 18 19:47:49 crc kubenswrapper[4942]: I0218 19:47:49.187466 4942 scope.go:117] "RemoveContainer" containerID="3d586c465df9e16d18d5348d207063c859dc4c0c45589222afa474013bd766c5"
Feb 18 19:47:49 crc kubenswrapper[4942]: I0218 19:47:49.243878 4942 scope.go:117] "RemoveContainer" containerID="ebb11ccd20be89bb58e99f7b4e01c65708315c8dea33a27fefa79d1ee13756e9"
Feb 18 19:47:50 crc kubenswrapper[4942]: I0218 19:47:50.039537 4942 generic.go:334] "Generic (PLEG): container finished" podID="3aa31c62-3ec4-4764-b3c9-915f2ed0d979" containerID="b8781f6763e3c28d5b28c6dbe67fce458666fad6da6df65d68c5f9c6691cc2d4" exitCode=0
Feb 18 19:47:50 crc kubenswrapper[4942]: I0218 19:47:50.039579 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd" event={"ID":"3aa31c62-3ec4-4764-b3c9-915f2ed0d979","Type":"ContainerDied","Data":"b8781f6763e3c28d5b28c6dbe67fce458666fad6da6df65d68c5f9c6691cc2d4"}
Feb 18 19:47:51 crc kubenswrapper[4942]: I0218 19:47:51.499219 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd"
Feb 18 19:47:51 crc kubenswrapper[4942]: I0218 19:47:51.621568 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb9js\" (UniqueName: \"kubernetes.io/projected/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-kube-api-access-zb9js\") pod \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\" (UID: \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\") "
Feb 18 19:47:51 crc kubenswrapper[4942]: I0218 19:47:51.621865 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-ssh-key-openstack-edpm-ipam\") pod \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\" (UID: \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\") "
Feb 18 19:47:51 crc kubenswrapper[4942]: I0218 19:47:51.621889 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-inventory\") pod \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\" (UID: \"3aa31c62-3ec4-4764-b3c9-915f2ed0d979\") "
Feb 18 19:47:51 crc kubenswrapper[4942]: I0218 19:47:51.627971 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-kube-api-access-zb9js" (OuterVolumeSpecName: "kube-api-access-zb9js") pod "3aa31c62-3ec4-4764-b3c9-915f2ed0d979" (UID: "3aa31c62-3ec4-4764-b3c9-915f2ed0d979"). InnerVolumeSpecName "kube-api-access-zb9js". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:47:51 crc kubenswrapper[4942]: I0218 19:47:51.655297 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-inventory" (OuterVolumeSpecName: "inventory") pod "3aa31c62-3ec4-4764-b3c9-915f2ed0d979" (UID: "3aa31c62-3ec4-4764-b3c9-915f2ed0d979"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:47:51 crc kubenswrapper[4942]: I0218 19:47:51.655830 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3aa31c62-3ec4-4764-b3c9-915f2ed0d979" (UID: "3aa31c62-3ec4-4764-b3c9-915f2ed0d979"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:47:51 crc kubenswrapper[4942]: I0218 19:47:51.724871 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb9js\" (UniqueName: \"kubernetes.io/projected/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-kube-api-access-zb9js\") on node \"crc\" DevicePath \"\""
Feb 18 19:47:51 crc kubenswrapper[4942]: I0218 19:47:51.724938 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 19:47:51 crc kubenswrapper[4942]: I0218 19:47:51.724965 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3aa31c62-3ec4-4764-b3c9-915f2ed0d979-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.067565 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd" event={"ID":"3aa31c62-3ec4-4764-b3c9-915f2ed0d979","Type":"ContainerDied","Data":"40b6d04180d9292202facb30248186130f9617cf1e6b90655f07b0a1a6ace2e6"}
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.067626 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40b6d04180d9292202facb30248186130f9617cf1e6b90655f07b0a1a6ace2e6"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.067661 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-sp7kd"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.164050 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qlvtb"]
Feb 18 19:47:52 crc kubenswrapper[4942]: E0218 19:47:52.164725 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa31c62-3ec4-4764-b3c9-915f2ed0d979" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.164755 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa31c62-3ec4-4764-b3c9-915f2ed0d979" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.165234 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aa31c62-3ec4-4764-b3c9-915f2ed0d979" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.166321 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.168165 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.168905 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.169296 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.177621 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qlvtb"]
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.182386 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.337772 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/746ae939-383d-48e0-98ab-12f13962d6d3-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qlvtb\" (UID: \"746ae939-383d-48e0-98ab-12f13962d6d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.337939 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xdzg\" (UniqueName: \"kubernetes.io/projected/746ae939-383d-48e0-98ab-12f13962d6d3-kube-api-access-4xdzg\") pod \"ssh-known-hosts-edpm-deployment-qlvtb\" (UID: \"746ae939-383d-48e0-98ab-12f13962d6d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.338270 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/746ae939-383d-48e0-98ab-12f13962d6d3-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qlvtb\" (UID: \"746ae939-383d-48e0-98ab-12f13962d6d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.440717 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/746ae939-383d-48e0-98ab-12f13962d6d3-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qlvtb\" (UID: \"746ae939-383d-48e0-98ab-12f13962d6d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.440842 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xdzg\" (UniqueName: \"kubernetes.io/projected/746ae939-383d-48e0-98ab-12f13962d6d3-kube-api-access-4xdzg\") pod \"ssh-known-hosts-edpm-deployment-qlvtb\" (UID: \"746ae939-383d-48e0-98ab-12f13962d6d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.440966 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/746ae939-383d-48e0-98ab-12f13962d6d3-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qlvtb\" (UID: \"746ae939-383d-48e0-98ab-12f13962d6d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb"
Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.447899 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/746ae939-383d-48e0-98ab-12f13962d6d3-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qlvtb\" (UID: \"746ae939-383d-48e0-98ab-12f13962d6d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb"
Feb 18 19:47:52 crc kubenswrapper[4942]: 
I0218 19:47:52.447986 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/746ae939-383d-48e0-98ab-12f13962d6d3-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qlvtb\" (UID: \"746ae939-383d-48e0-98ab-12f13962d6d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb" Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.461273 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xdzg\" (UniqueName: \"kubernetes.io/projected/746ae939-383d-48e0-98ab-12f13962d6d3-kube-api-access-4xdzg\") pod \"ssh-known-hosts-edpm-deployment-qlvtb\" (UID: \"746ae939-383d-48e0-98ab-12f13962d6d3\") " pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb" Feb 18 19:47:52 crc kubenswrapper[4942]: I0218 19:47:52.486601 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb" Feb 18 19:47:53 crc kubenswrapper[4942]: I0218 19:47:53.103128 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qlvtb"] Feb 18 19:47:53 crc kubenswrapper[4942]: I0218 19:47:53.115742 4942 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:47:54 crc kubenswrapper[4942]: I0218 19:47:54.089308 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb" event={"ID":"746ae939-383d-48e0-98ab-12f13962d6d3","Type":"ContainerStarted","Data":"4503660b2cafb14bcddd36eacccdf5e5fdbd3d637549e419febdc58939a91d5c"} Feb 18 19:47:54 crc kubenswrapper[4942]: I0218 19:47:54.089667 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb" event={"ID":"746ae939-383d-48e0-98ab-12f13962d6d3","Type":"ContainerStarted","Data":"083bd6858b06e05ab72bfbfa1958a46743bdb5d0a05522ffbba47f00bc6504b2"} Feb 18 19:47:54 crc kubenswrapper[4942]: I0218 
19:47:54.120013 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb" podStartSLOduration=1.65213203 podStartE2EDuration="2.119983422s" podCreationTimestamp="2026-02-18 19:47:52 +0000 UTC" firstStartedPulling="2026-02-18 19:47:53.115484445 +0000 UTC m=+1832.820417110" lastFinishedPulling="2026-02-18 19:47:53.583335797 +0000 UTC m=+1833.288268502" observedRunningTime="2026-02-18 19:47:54.106160988 +0000 UTC m=+1833.811093693" watchObservedRunningTime="2026-02-18 19:47:54.119983422 +0000 UTC m=+1833.824916117" Feb 18 19:47:57 crc kubenswrapper[4942]: I0218 19:47:57.036622 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:47:57 crc kubenswrapper[4942]: E0218 19:47:57.037568 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:48:01 crc kubenswrapper[4942]: I0218 19:48:01.177996 4942 generic.go:334] "Generic (PLEG): container finished" podID="746ae939-383d-48e0-98ab-12f13962d6d3" containerID="4503660b2cafb14bcddd36eacccdf5e5fdbd3d637549e419febdc58939a91d5c" exitCode=0 Feb 18 19:48:01 crc kubenswrapper[4942]: I0218 19:48:01.178069 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb" event={"ID":"746ae939-383d-48e0-98ab-12f13962d6d3","Type":"ContainerDied","Data":"4503660b2cafb14bcddd36eacccdf5e5fdbd3d637549e419febdc58939a91d5c"} Feb 18 19:48:02 crc kubenswrapper[4942]: I0218 19:48:02.691504 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb" Feb 18 19:48:02 crc kubenswrapper[4942]: I0218 19:48:02.780535 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/746ae939-383d-48e0-98ab-12f13962d6d3-ssh-key-openstack-edpm-ipam\") pod \"746ae939-383d-48e0-98ab-12f13962d6d3\" (UID: \"746ae939-383d-48e0-98ab-12f13962d6d3\") " Feb 18 19:48:02 crc kubenswrapper[4942]: I0218 19:48:02.780730 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/746ae939-383d-48e0-98ab-12f13962d6d3-inventory-0\") pod \"746ae939-383d-48e0-98ab-12f13962d6d3\" (UID: \"746ae939-383d-48e0-98ab-12f13962d6d3\") " Feb 18 19:48:02 crc kubenswrapper[4942]: I0218 19:48:02.780888 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xdzg\" (UniqueName: \"kubernetes.io/projected/746ae939-383d-48e0-98ab-12f13962d6d3-kube-api-access-4xdzg\") pod \"746ae939-383d-48e0-98ab-12f13962d6d3\" (UID: \"746ae939-383d-48e0-98ab-12f13962d6d3\") " Feb 18 19:48:02 crc kubenswrapper[4942]: I0218 19:48:02.787077 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/746ae939-383d-48e0-98ab-12f13962d6d3-kube-api-access-4xdzg" (OuterVolumeSpecName: "kube-api-access-4xdzg") pod "746ae939-383d-48e0-98ab-12f13962d6d3" (UID: "746ae939-383d-48e0-98ab-12f13962d6d3"). InnerVolumeSpecName "kube-api-access-4xdzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:48:02 crc kubenswrapper[4942]: I0218 19:48:02.810130 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/746ae939-383d-48e0-98ab-12f13962d6d3-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "746ae939-383d-48e0-98ab-12f13962d6d3" (UID: "746ae939-383d-48e0-98ab-12f13962d6d3"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:48:02 crc kubenswrapper[4942]: I0218 19:48:02.811932 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/746ae939-383d-48e0-98ab-12f13962d6d3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "746ae939-383d-48e0-98ab-12f13962d6d3" (UID: "746ae939-383d-48e0-98ab-12f13962d6d3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:48:02 crc kubenswrapper[4942]: I0218 19:48:02.883518 4942 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/746ae939-383d-48e0-98ab-12f13962d6d3-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:02 crc kubenswrapper[4942]: I0218 19:48:02.883569 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xdzg\" (UniqueName: \"kubernetes.io/projected/746ae939-383d-48e0-98ab-12f13962d6d3-kube-api-access-4xdzg\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:02 crc kubenswrapper[4942]: I0218 19:48:02.883593 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/746ae939-383d-48e0-98ab-12f13962d6d3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.208431 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb" event={"ID":"746ae939-383d-48e0-98ab-12f13962d6d3","Type":"ContainerDied","Data":"083bd6858b06e05ab72bfbfa1958a46743bdb5d0a05522ffbba47f00bc6504b2"} Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.208493 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="083bd6858b06e05ab72bfbfa1958a46743bdb5d0a05522ffbba47f00bc6504b2" Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.208585 
4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qlvtb" Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.323596 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb"] Feb 18 19:48:03 crc kubenswrapper[4942]: E0218 19:48:03.324127 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="746ae939-383d-48e0-98ab-12f13962d6d3" containerName="ssh-known-hosts-edpm-deployment" Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.324152 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="746ae939-383d-48e0-98ab-12f13962d6d3" containerName="ssh-known-hosts-edpm-deployment" Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.324449 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="746ae939-383d-48e0-98ab-12f13962d6d3" containerName="ssh-known-hosts-edpm-deployment" Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.325300 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb" Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.327861 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh" Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.328025 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.328421 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.332719 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.338740 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb"] Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.498047 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90bc7193-8433-4354-99c8-e441b477670b-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kp5sb\" (UID: \"90bc7193-8433-4354-99c8-e441b477670b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb" Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.498206 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90bc7193-8433-4354-99c8-e441b477670b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kp5sb\" (UID: \"90bc7193-8433-4354-99c8-e441b477670b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb" Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.498658 4942 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56548\" (UniqueName: \"kubernetes.io/projected/90bc7193-8433-4354-99c8-e441b477670b-kube-api-access-56548\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kp5sb\" (UID: \"90bc7193-8433-4354-99c8-e441b477670b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb" Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.601124 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56548\" (UniqueName: \"kubernetes.io/projected/90bc7193-8433-4354-99c8-e441b477670b-kube-api-access-56548\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kp5sb\" (UID: \"90bc7193-8433-4354-99c8-e441b477670b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb" Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.601324 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90bc7193-8433-4354-99c8-e441b477670b-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kp5sb\" (UID: \"90bc7193-8433-4354-99c8-e441b477670b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb" Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.601419 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90bc7193-8433-4354-99c8-e441b477670b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kp5sb\" (UID: \"90bc7193-8433-4354-99c8-e441b477670b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb" Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.606972 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90bc7193-8433-4354-99c8-e441b477670b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kp5sb\" (UID: 
\"90bc7193-8433-4354-99c8-e441b477670b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb" Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.618393 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90bc7193-8433-4354-99c8-e441b477670b-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kp5sb\" (UID: \"90bc7193-8433-4354-99c8-e441b477670b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb" Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.621485 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56548\" (UniqueName: \"kubernetes.io/projected/90bc7193-8433-4354-99c8-e441b477670b-kube-api-access-56548\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kp5sb\" (UID: \"90bc7193-8433-4354-99c8-e441b477670b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb" Feb 18 19:48:03 crc kubenswrapper[4942]: I0218 19:48:03.648934 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb" Feb 18 19:48:04 crc kubenswrapper[4942]: I0218 19:48:04.170884 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb"] Feb 18 19:48:04 crc kubenswrapper[4942]: I0218 19:48:04.217989 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb" event={"ID":"90bc7193-8433-4354-99c8-e441b477670b","Type":"ContainerStarted","Data":"832e643169e6918b6869e534b927c648a3ced25a15e292d62f10bbfb1e11f139"} Feb 18 19:48:05 crc kubenswrapper[4942]: I0218 19:48:05.231450 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb" event={"ID":"90bc7193-8433-4354-99c8-e441b477670b","Type":"ContainerStarted","Data":"c0caeade510de70c11ac75f5bb95ac1d1b35661f2a2e4df9f57a3d094eacf300"} Feb 18 19:48:05 crc kubenswrapper[4942]: I0218 19:48:05.265243 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb" podStartSLOduration=1.838542063 podStartE2EDuration="2.265220549s" podCreationTimestamp="2026-02-18 19:48:03 +0000 UTC" firstStartedPulling="2026-02-18 19:48:04.169995601 +0000 UTC m=+1843.874928286" lastFinishedPulling="2026-02-18 19:48:04.596674107 +0000 UTC m=+1844.301606772" observedRunningTime="2026-02-18 19:48:05.252986337 +0000 UTC m=+1844.957919032" watchObservedRunningTime="2026-02-18 19:48:05.265220549 +0000 UTC m=+1844.970153224" Feb 18 19:48:09 crc kubenswrapper[4942]: I0218 19:48:09.036152 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:48:09 crc kubenswrapper[4942]: E0218 19:48:09.037184 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:48:09 crc kubenswrapper[4942]: I0218 19:48:09.064691 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-6sjb6"] Feb 18 19:48:09 crc kubenswrapper[4942]: I0218 19:48:09.079363 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-6sjb6"] Feb 18 19:48:11 crc kubenswrapper[4942]: I0218 19:48:11.049926 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c972a02-9d35-43d1-9ef6-ab99f7cded50" path="/var/lib/kubelet/pods/2c972a02-9d35-43d1-9ef6-ab99f7cded50/volumes" Feb 18 19:48:13 crc kubenswrapper[4942]: I0218 19:48:13.312740 4942 generic.go:334] "Generic (PLEG): container finished" podID="90bc7193-8433-4354-99c8-e441b477670b" containerID="c0caeade510de70c11ac75f5bb95ac1d1b35661f2a2e4df9f57a3d094eacf300" exitCode=0 Feb 18 19:48:13 crc kubenswrapper[4942]: I0218 19:48:13.312857 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb" event={"ID":"90bc7193-8433-4354-99c8-e441b477670b","Type":"ContainerDied","Data":"c0caeade510de70c11ac75f5bb95ac1d1b35661f2a2e4df9f57a3d094eacf300"} Feb 18 19:48:14 crc kubenswrapper[4942]: I0218 19:48:14.760964 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb" Feb 18 19:48:14 crc kubenswrapper[4942]: I0218 19:48:14.855430 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90bc7193-8433-4354-99c8-e441b477670b-inventory\") pod \"90bc7193-8433-4354-99c8-e441b477670b\" (UID: \"90bc7193-8433-4354-99c8-e441b477670b\") " Feb 18 19:48:14 crc kubenswrapper[4942]: I0218 19:48:14.855514 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56548\" (UniqueName: \"kubernetes.io/projected/90bc7193-8433-4354-99c8-e441b477670b-kube-api-access-56548\") pod \"90bc7193-8433-4354-99c8-e441b477670b\" (UID: \"90bc7193-8433-4354-99c8-e441b477670b\") " Feb 18 19:48:14 crc kubenswrapper[4942]: I0218 19:48:14.855544 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90bc7193-8433-4354-99c8-e441b477670b-ssh-key-openstack-edpm-ipam\") pod \"90bc7193-8433-4354-99c8-e441b477670b\" (UID: \"90bc7193-8433-4354-99c8-e441b477670b\") " Feb 18 19:48:14 crc kubenswrapper[4942]: I0218 19:48:14.860956 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90bc7193-8433-4354-99c8-e441b477670b-kube-api-access-56548" (OuterVolumeSpecName: "kube-api-access-56548") pod "90bc7193-8433-4354-99c8-e441b477670b" (UID: "90bc7193-8433-4354-99c8-e441b477670b"). InnerVolumeSpecName "kube-api-access-56548". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:48:14 crc kubenswrapper[4942]: I0218 19:48:14.888733 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90bc7193-8433-4354-99c8-e441b477670b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "90bc7193-8433-4354-99c8-e441b477670b" (UID: "90bc7193-8433-4354-99c8-e441b477670b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:48:14 crc kubenswrapper[4942]: I0218 19:48:14.891342 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90bc7193-8433-4354-99c8-e441b477670b-inventory" (OuterVolumeSpecName: "inventory") pod "90bc7193-8433-4354-99c8-e441b477670b" (UID: "90bc7193-8433-4354-99c8-e441b477670b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:48:14 crc kubenswrapper[4942]: I0218 19:48:14.960869 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90bc7193-8433-4354-99c8-e441b477670b-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:14 crc kubenswrapper[4942]: I0218 19:48:14.960918 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56548\" (UniqueName: \"kubernetes.io/projected/90bc7193-8433-4354-99c8-e441b477670b-kube-api-access-56548\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:14 crc kubenswrapper[4942]: I0218 19:48:14.960968 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90bc7193-8433-4354-99c8-e441b477670b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.337258 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb" 
event={"ID":"90bc7193-8433-4354-99c8-e441b477670b","Type":"ContainerDied","Data":"832e643169e6918b6869e534b927c648a3ced25a15e292d62f10bbfb1e11f139"} Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.337622 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="832e643169e6918b6869e534b927c648a3ced25a15e292d62f10bbfb1e11f139" Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.337335 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kp5sb" Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.445153 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf"] Feb 18 19:48:15 crc kubenswrapper[4942]: E0218 19:48:15.445617 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90bc7193-8433-4354-99c8-e441b477670b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.445640 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="90bc7193-8433-4354-99c8-e441b477670b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.445911 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="90bc7193-8433-4354-99c8-e441b477670b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.446686 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf" Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.449921 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.450648 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh" Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.454401 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.455249 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.462943 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf"] Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.586086 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79e81623-b595-4683-81b3-89c5a11f8237-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf\" (UID: \"79e81623-b595-4683-81b3-89c5a11f8237\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf" Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.586348 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79e81623-b595-4683-81b3-89c5a11f8237-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf\" (UID: \"79e81623-b595-4683-81b3-89c5a11f8237\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf" Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.587016 4942 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js2pn\" (UniqueName: \"kubernetes.io/projected/79e81623-b595-4683-81b3-89c5a11f8237-kube-api-access-js2pn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf\" (UID: \"79e81623-b595-4683-81b3-89c5a11f8237\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf" Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.688964 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js2pn\" (UniqueName: \"kubernetes.io/projected/79e81623-b595-4683-81b3-89c5a11f8237-kube-api-access-js2pn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf\" (UID: \"79e81623-b595-4683-81b3-89c5a11f8237\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf" Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.689050 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79e81623-b595-4683-81b3-89c5a11f8237-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf\" (UID: \"79e81623-b595-4683-81b3-89c5a11f8237\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf" Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.689082 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79e81623-b595-4683-81b3-89c5a11f8237-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf\" (UID: \"79e81623-b595-4683-81b3-89c5a11f8237\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf" Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.693304 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79e81623-b595-4683-81b3-89c5a11f8237-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf\" (UID: \"79e81623-b595-4683-81b3-89c5a11f8237\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf" Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.696299 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79e81623-b595-4683-81b3-89c5a11f8237-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf\" (UID: \"79e81623-b595-4683-81b3-89c5a11f8237\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf" Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.719718 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js2pn\" (UniqueName: \"kubernetes.io/projected/79e81623-b595-4683-81b3-89c5a11f8237-kube-api-access-js2pn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf\" (UID: \"79e81623-b595-4683-81b3-89c5a11f8237\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf" Feb 18 19:48:15 crc kubenswrapper[4942]: I0218 19:48:15.775348 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf" Feb 18 19:48:16 crc kubenswrapper[4942]: I0218 19:48:16.177442 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf"] Feb 18 19:48:16 crc kubenswrapper[4942]: W0218 19:48:16.183033 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79e81623_b595_4683_81b3_89c5a11f8237.slice/crio-0dd19f57f7767a16796e0297a7dd9ca4c53eec6f04f4a54a1a4865683242c6a8 WatchSource:0}: Error finding container 0dd19f57f7767a16796e0297a7dd9ca4c53eec6f04f4a54a1a4865683242c6a8: Status 404 returned error can't find the container with id 0dd19f57f7767a16796e0297a7dd9ca4c53eec6f04f4a54a1a4865683242c6a8 Feb 18 19:48:16 crc kubenswrapper[4942]: I0218 19:48:16.345410 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf" event={"ID":"79e81623-b595-4683-81b3-89c5a11f8237","Type":"ContainerStarted","Data":"0dd19f57f7767a16796e0297a7dd9ca4c53eec6f04f4a54a1a4865683242c6a8"} Feb 18 19:48:17 crc kubenswrapper[4942]: I0218 19:48:17.360138 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf" event={"ID":"79e81623-b595-4683-81b3-89c5a11f8237","Type":"ContainerStarted","Data":"be278f8aa5596652dd2d6708280f59cf7d32aa410dae321c030e034b045c3fe3"} Feb 18 19:48:22 crc kubenswrapper[4942]: I0218 19:48:22.035895 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:48:22 crc kubenswrapper[4942]: E0218 19:48:22.036992 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:48:26 crc kubenswrapper[4942]: I0218 19:48:26.469694 4942 generic.go:334] "Generic (PLEG): container finished" podID="79e81623-b595-4683-81b3-89c5a11f8237" containerID="be278f8aa5596652dd2d6708280f59cf7d32aa410dae321c030e034b045c3fe3" exitCode=0 Feb 18 19:48:26 crc kubenswrapper[4942]: I0218 19:48:26.469796 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf" event={"ID":"79e81623-b595-4683-81b3-89c5a11f8237","Type":"ContainerDied","Data":"be278f8aa5596652dd2d6708280f59cf7d32aa410dae321c030e034b045c3fe3"} Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.058651 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.203835 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79e81623-b595-4683-81b3-89c5a11f8237-ssh-key-openstack-edpm-ipam\") pod \"79e81623-b595-4683-81b3-89c5a11f8237\" (UID: \"79e81623-b595-4683-81b3-89c5a11f8237\") " Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.204244 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js2pn\" (UniqueName: \"kubernetes.io/projected/79e81623-b595-4683-81b3-89c5a11f8237-kube-api-access-js2pn\") pod \"79e81623-b595-4683-81b3-89c5a11f8237\" (UID: \"79e81623-b595-4683-81b3-89c5a11f8237\") " Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.204468 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/79e81623-b595-4683-81b3-89c5a11f8237-inventory\") pod \"79e81623-b595-4683-81b3-89c5a11f8237\" (UID: \"79e81623-b595-4683-81b3-89c5a11f8237\") " Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.221682 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e81623-b595-4683-81b3-89c5a11f8237-kube-api-access-js2pn" (OuterVolumeSpecName: "kube-api-access-js2pn") pod "79e81623-b595-4683-81b3-89c5a11f8237" (UID: "79e81623-b595-4683-81b3-89c5a11f8237"). InnerVolumeSpecName "kube-api-access-js2pn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.240030 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e81623-b595-4683-81b3-89c5a11f8237-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "79e81623-b595-4683-81b3-89c5a11f8237" (UID: "79e81623-b595-4683-81b3-89c5a11f8237"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.251530 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79e81623-b595-4683-81b3-89c5a11f8237-inventory" (OuterVolumeSpecName: "inventory") pod "79e81623-b595-4683-81b3-89c5a11f8237" (UID: "79e81623-b595-4683-81b3-89c5a11f8237"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.307972 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79e81623-b595-4683-81b3-89c5a11f8237-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.308007 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js2pn\" (UniqueName: \"kubernetes.io/projected/79e81623-b595-4683-81b3-89c5a11f8237-kube-api-access-js2pn\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.308019 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79e81623-b595-4683-81b3-89c5a11f8237-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.514080 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf" event={"ID":"79e81623-b595-4683-81b3-89c5a11f8237","Type":"ContainerDied","Data":"0dd19f57f7767a16796e0297a7dd9ca4c53eec6f04f4a54a1a4865683242c6a8"} Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.514150 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dd19f57f7767a16796e0297a7dd9ca4c53eec6f04f4a54a1a4865683242c6a8" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.514173 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mgjjf" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.628170 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9"] Feb 18 19:48:28 crc kubenswrapper[4942]: E0218 19:48:28.628946 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e81623-b595-4683-81b3-89c5a11f8237" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.629035 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e81623-b595-4683-81b3-89c5a11f8237" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.629333 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e81623-b595-4683-81b3-89c5a11f8237" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.630265 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.635079 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.641654 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.641937 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.642388 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.642602 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.642840 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.643742 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.644127 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.661929 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9"] Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.820958 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.821040 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.821071 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.821092 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.821156 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.821196 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.821216 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.821251 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.821273 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.821290 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8gnl\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-kube-api-access-z8gnl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.821306 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.821423 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.821455 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.821511 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.923061 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.923115 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.923151 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.923191 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.923263 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.923300 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.923328 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: 
\"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.923380 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.923435 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.923464 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.923514 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc 
kubenswrapper[4942]: I0218 19:48:28.923545 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.923571 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.923590 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8gnl\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-kube-api-access-z8gnl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.927359 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.928203 4942 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.928978 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.929879 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.929980 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.930342 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.931491 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.931575 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.931602 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.932069 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.934314 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.934632 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.935175 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.943289 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8gnl\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-kube-api-access-z8gnl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:28 crc kubenswrapper[4942]: I0218 19:48:28.957172 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:48:29 crc kubenswrapper[4942]: I0218 19:48:29.563027 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9"] Feb 18 19:48:30 crc kubenswrapper[4942]: I0218 19:48:30.540959 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" event={"ID":"3dd9d927-61b8-4c83-93f9-131ab03cb0cc","Type":"ContainerStarted","Data":"3791c445a8aed4a1c329e58e05be1af60aa43c45c0de7b755485eb69e49cd109"} Feb 18 19:48:30 crc kubenswrapper[4942]: I0218 19:48:30.541324 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" event={"ID":"3dd9d927-61b8-4c83-93f9-131ab03cb0cc","Type":"ContainerStarted","Data":"8eef9dd84c19f1e2b5d433ec3f8f7e827b07e842d030159eea65323eacc71c4a"} Feb 18 19:48:30 crc kubenswrapper[4942]: I0218 19:48:30.576709 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" podStartSLOduration=2.081833057 podStartE2EDuration="2.57668729s" podCreationTimestamp="2026-02-18 19:48:28 +0000 UTC" firstStartedPulling="2026-02-18 19:48:29.567802468 +0000 UTC m=+1869.272735143" lastFinishedPulling="2026-02-18 19:48:30.062656691 +0000 UTC m=+1869.767589376" observedRunningTime="2026-02-18 19:48:30.56227007 +0000 UTC m=+1870.267202755" watchObservedRunningTime="2026-02-18 19:48:30.57668729 +0000 UTC m=+1870.281619965" Feb 18 19:48:37 crc kubenswrapper[4942]: I0218 19:48:37.037437 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 
19:48:37 crc kubenswrapper[4942]: E0218 19:48:37.038397 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:48:49 crc kubenswrapper[4942]: I0218 19:48:49.358391 4942 scope.go:117] "RemoveContainer" containerID="493fbf668fd581eae9f157a3d4dd7cefc935750aeaa50d79a8dc2cadd67f3413" Feb 18 19:48:52 crc kubenswrapper[4942]: I0218 19:48:52.035789 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:48:52 crc kubenswrapper[4942]: E0218 19:48:52.036362 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:49:04 crc kubenswrapper[4942]: I0218 19:49:04.036120 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:49:04 crc kubenswrapper[4942]: E0218 19:49:04.036818 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" 
podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:49:07 crc kubenswrapper[4942]: I0218 19:49:07.937726 4942 generic.go:334] "Generic (PLEG): container finished" podID="3dd9d927-61b8-4c83-93f9-131ab03cb0cc" containerID="3791c445a8aed4a1c329e58e05be1af60aa43c45c0de7b755485eb69e49cd109" exitCode=0 Feb 18 19:49:07 crc kubenswrapper[4942]: I0218 19:49:07.937843 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" event={"ID":"3dd9d927-61b8-4c83-93f9-131ab03cb0cc","Type":"ContainerDied","Data":"3791c445a8aed4a1c329e58e05be1af60aa43c45c0de7b755485eb69e49cd109"} Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.493275 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.621177 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.621242 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-telemetry-combined-ca-bundle\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.621301 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-libvirt-combined-ca-bundle\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: 
\"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.621349 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-ovn-combined-ca-bundle\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.621432 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-ssh-key-openstack-edpm-ipam\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.621477 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-repo-setup-combined-ca-bundle\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.621519 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-inventory\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.621621 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 
19:49:09.621800 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-neutron-metadata-combined-ca-bundle\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.621840 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-nova-combined-ca-bundle\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.621939 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.621987 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8gnl\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-kube-api-access-z8gnl\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.622038 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.622079 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-bootstrap-combined-ca-bundle\") pod \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\" (UID: \"3dd9d927-61b8-4c83-93f9-131ab03cb0cc\") " Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.628099 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.628236 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.630839 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.631739 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.631902 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.632992 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.633080 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-kube-api-access-z8gnl" (OuterVolumeSpecName: "kube-api-access-z8gnl") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). InnerVolumeSpecName "kube-api-access-z8gnl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.634038 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.634146 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.636820 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.644875 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.644962 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.662907 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-inventory" (OuterVolumeSpecName: "inventory") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.679834 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3dd9d927-61b8-4c83-93f9-131ab03cb0cc" (UID: "3dd9d927-61b8-4c83-93f9-131ab03cb0cc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726278 4942 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726327 4942 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726350 4942 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726374 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8gnl\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-kube-api-access-z8gnl\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726394 4942 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726413 4942 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726433 4942 reconciler_common.go:293] "Volume detached for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726452 4942 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726470 4942 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726492 4942 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726510 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726528 4942 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726548 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.726569 4942 
reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3dd9d927-61b8-4c83-93f9-131ab03cb0cc-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.962009 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" event={"ID":"3dd9d927-61b8-4c83-93f9-131ab03cb0cc","Type":"ContainerDied","Data":"8eef9dd84c19f1e2b5d433ec3f8f7e827b07e842d030159eea65323eacc71c4a"} Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.962088 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eef9dd84c19f1e2b5d433ec3f8f7e827b07e842d030159eea65323eacc71c4a" Feb 18 19:49:09 crc kubenswrapper[4942]: I0218 19:49:09.962188 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-d9gs9" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.106317 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b"] Feb 18 19:49:10 crc kubenswrapper[4942]: E0218 19:49:10.107020 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dd9d927-61b8-4c83-93f9-131ab03cb0cc" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.107052 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dd9d927-61b8-4c83-93f9-131ab03cb0cc" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.107412 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dd9d927-61b8-4c83-93f9-131ab03cb0cc" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.108539 4942 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.113383 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.113691 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.114215 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.114404 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.114740 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.126025 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b"] Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.235988 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6d57\" (UniqueName: \"kubernetes.io/projected/bae512dc-7305-4dc5-b47a-524c9b8f57ab-kube-api-access-w6d57\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.236373 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ovncontroller-config-0\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.236529 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.236678 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.236739 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.338507 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6d57\" (UniqueName: \"kubernetes.io/projected/bae512dc-7305-4dc5-b47a-524c9b8f57ab-kube-api-access-w6d57\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 
19:49:10.338561 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.338642 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.338698 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.338734 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.339833 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ovncontroller-config-0\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.346996 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.349314 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.352431 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.362952 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6d57\" (UniqueName: \"kubernetes.io/projected/bae512dc-7305-4dc5-b47a-524c9b8f57ab-kube-api-access-w6d57\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vbk8b\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.431302 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" Feb 18 19:49:10 crc kubenswrapper[4942]: I0218 19:49:10.996294 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b"] Feb 18 19:49:11 crc kubenswrapper[4942]: I0218 19:49:11.982636 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" event={"ID":"bae512dc-7305-4dc5-b47a-524c9b8f57ab","Type":"ContainerStarted","Data":"9f90e2546d8c227bc117262fd71f9c8456682f751fed957e9711a9d5bf183e6f"} Feb 18 19:49:11 crc kubenswrapper[4942]: I0218 19:49:11.983027 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" event={"ID":"bae512dc-7305-4dc5-b47a-524c9b8f57ab","Type":"ContainerStarted","Data":"5352973215c03a80c13a0dd4da3a49efd6b06a0bf8db37bee68ea4a246523de3"} Feb 18 19:49:12 crc kubenswrapper[4942]: I0218 19:49:12.000896 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" podStartSLOduration=1.579119315 podStartE2EDuration="2.000873352s" podCreationTimestamp="2026-02-18 19:49:10 +0000 UTC" firstStartedPulling="2026-02-18 19:49:11.000438282 +0000 UTC m=+1910.705370957" lastFinishedPulling="2026-02-18 19:49:11.422192289 +0000 UTC m=+1911.127124994" observedRunningTime="2026-02-18 19:49:11.999620689 +0000 UTC m=+1911.704553364" watchObservedRunningTime="2026-02-18 19:49:12.000873352 +0000 UTC m=+1911.705806067" Feb 18 19:49:15 crc kubenswrapper[4942]: I0218 19:49:15.035578 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:49:15 crc kubenswrapper[4942]: E0218 19:49:15.036196 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:49:29 crc kubenswrapper[4942]: I0218 19:49:29.036085 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:49:29 crc kubenswrapper[4942]: E0218 19:49:29.036802 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:49:40 crc kubenswrapper[4942]: I0218 19:49:40.037060 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:49:40 crc kubenswrapper[4942]: E0218 19:49:40.038226 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:49:53 crc kubenswrapper[4942]: I0218 19:49:53.037477 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:49:53 crc kubenswrapper[4942]: E0218 19:49:53.040193 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 19:50:07 crc kubenswrapper[4942]: I0218 19:50:07.036963    4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19"
Feb 18 19:50:07 crc kubenswrapper[4942]: E0218 19:50:07.037628    4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 19:50:16 crc kubenswrapper[4942]: I0218 19:50:16.826108    4942 generic.go:334] "Generic (PLEG): container finished" podID="bae512dc-7305-4dc5-b47a-524c9b8f57ab" containerID="9f90e2546d8c227bc117262fd71f9c8456682f751fed957e9711a9d5bf183e6f" exitCode=0
Feb 18 19:50:16 crc kubenswrapper[4942]: I0218 19:50:16.826196    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" event={"ID":"bae512dc-7305-4dc5-b47a-524c9b8f57ab","Type":"ContainerDied","Data":"9f90e2546d8c227bc117262fd71f9c8456682f751fed957e9711a9d5bf183e6f"}
Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.278455    4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b"
Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.390616    4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ssh-key-openstack-edpm-ipam\") pod \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") "
Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.390718    4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-inventory\") pod \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") "
Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.390827    4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ovncontroller-config-0\") pod \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") "
Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.390885    4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ovn-combined-ca-bundle\") pod \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") "
Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.391033    4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6d57\" (UniqueName: \"kubernetes.io/projected/bae512dc-7305-4dc5-b47a-524c9b8f57ab-kube-api-access-w6d57\") pod \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\" (UID: \"bae512dc-7305-4dc5-b47a-524c9b8f57ab\") "
Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.396307    4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "bae512dc-7305-4dc5-b47a-524c9b8f57ab" (UID: "bae512dc-7305-4dc5-b47a-524c9b8f57ab"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.407061    4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bae512dc-7305-4dc5-b47a-524c9b8f57ab-kube-api-access-w6d57" (OuterVolumeSpecName: "kube-api-access-w6d57") pod "bae512dc-7305-4dc5-b47a-524c9b8f57ab" (UID: "bae512dc-7305-4dc5-b47a-524c9b8f57ab"). InnerVolumeSpecName "kube-api-access-w6d57". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.419516    4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bae512dc-7305-4dc5-b47a-524c9b8f57ab" (UID: "bae512dc-7305-4dc5-b47a-524c9b8f57ab"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.429235    4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "bae512dc-7305-4dc5-b47a-524c9b8f57ab" (UID: "bae512dc-7305-4dc5-b47a-524c9b8f57ab"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.443854    4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-inventory" (OuterVolumeSpecName: "inventory") pod "bae512dc-7305-4dc5-b47a-524c9b8f57ab" (UID: "bae512dc-7305-4dc5-b47a-524c9b8f57ab"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.493205    4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6d57\" (UniqueName: \"kubernetes.io/projected/bae512dc-7305-4dc5-b47a-524c9b8f57ab-kube-api-access-w6d57\") on node \"crc\" DevicePath \"\""
Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.493251    4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.493261    4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.493269    4942 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ovncontroller-config-0\") on node \"crc\" DevicePath \"\""
Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.493280    4942 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bae512dc-7305-4dc5-b47a-524c9b8f57ab-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.843943    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b" event={"ID":"bae512dc-7305-4dc5-b47a-524c9b8f57ab","Type":"ContainerDied","Data":"5352973215c03a80c13a0dd4da3a49efd6b06a0bf8db37bee68ea4a246523de3"}
Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.844476    4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5352973215c03a80c13a0dd4da3a49efd6b06a0bf8db37bee68ea4a246523de3"
Feb 18 19:50:18 crc kubenswrapper[4942]: I0218 19:50:18.844062    4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vbk8b"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.021169    4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"]
Feb 18 19:50:19 crc kubenswrapper[4942]: E0218 19:50:19.021753    4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae512dc-7305-4dc5-b47a-524c9b8f57ab" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.021874    4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae512dc-7305-4dc5-b47a-524c9b8f57ab" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.022183    4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="bae512dc-7305-4dc5-b47a-524c9b8f57ab" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.022940    4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.025194    4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.025559    4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.025597    4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.025914    4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.029029    4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.034198    4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.049096    4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"]
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.103725    4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.103813    4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.103886    4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.103961    4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrwfm\" (UniqueName: \"kubernetes.io/projected/7ae4842a-dc23-4e56-a33d-87df95cade92-kube-api-access-jrwfm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.104072    4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.104130    4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.209667    4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.209747    4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.209824    4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.209854    4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrwfm\" (UniqueName: \"kubernetes.io/projected/7ae4842a-dc23-4e56-a33d-87df95cade92-kube-api-access-jrwfm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.209929    4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.209986    4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.214862    4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.214861    4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.215559    4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.215664    4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.220056    4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.245542    4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrwfm\" (UniqueName: \"kubernetes.io/projected/7ae4842a-dc23-4e56-a33d-87df95cade92-kube-api-access-jrwfm\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.341356    4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"
Feb 18 19:50:19 crc kubenswrapper[4942]: I0218 19:50:19.902242    4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"]
Feb 18 19:50:20 crc kubenswrapper[4942]: I0218 19:50:20.868752    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" event={"ID":"7ae4842a-dc23-4e56-a33d-87df95cade92","Type":"ContainerStarted","Data":"261d9e0939cda0fb87a9e22441ae81a2f95c444b1440ca9d5683679ec038b2ae"}
Feb 18 19:50:21 crc kubenswrapper[4942]: I0218 19:50:21.051235    4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19"
Feb 18 19:50:21 crc kubenswrapper[4942]: E0218 19:50:21.052045    4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 19:50:21 crc kubenswrapper[4942]: I0218 19:50:21.882035    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" event={"ID":"7ae4842a-dc23-4e56-a33d-87df95cade92","Type":"ContainerStarted","Data":"75804daf0f8a67a86a9c2a3e7a0911dc2ff820a2e9d5fb6f79a4bfa2b98f6abe"}
Feb 18 19:50:21 crc kubenswrapper[4942]: I0218 19:50:21.924932    4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" podStartSLOduration=2.209477899 podStartE2EDuration="2.924909717s" podCreationTimestamp="2026-02-18 19:50:19 +0000 UTC" firstStartedPulling="2026-02-18 19:50:19.913007196 +0000 UTC m=+1979.617939861" lastFinishedPulling="2026-02-18 19:50:20.628438994 +0000 UTC m=+1980.333371679" observedRunningTime="2026-02-18 19:50:21.909039419 +0000 UTC m=+1981.613972104" watchObservedRunningTime="2026-02-18 19:50:21.924909717 +0000 UTC m=+1981.629842392"
Feb 18 19:50:32 crc kubenswrapper[4942]: I0218 19:50:32.036202    4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19"
Feb 18 19:50:33 crc kubenswrapper[4942]: I0218 19:50:33.006962    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"c81e1d649813c8beecb89429c1c4dde799b86b0af5d8804642a6a83d2ee52071"}
Feb 18 19:51:11 crc kubenswrapper[4942]: I0218 19:51:11.395366    4942 generic.go:334] "Generic (PLEG): container finished" podID="7ae4842a-dc23-4e56-a33d-87df95cade92" containerID="75804daf0f8a67a86a9c2a3e7a0911dc2ff820a2e9d5fb6f79a4bfa2b98f6abe" exitCode=0
Feb 18 19:51:11 crc kubenswrapper[4942]: I0218 19:51:11.395434    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" event={"ID":"7ae4842a-dc23-4e56-a33d-87df95cade92","Type":"ContainerDied","Data":"75804daf0f8a67a86a9c2a3e7a0911dc2ff820a2e9d5fb6f79a4bfa2b98f6abe"}
Feb 18 19:51:12 crc kubenswrapper[4942]: I0218 19:51:12.950399    4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.065381    4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrwfm\" (UniqueName: \"kubernetes.io/projected/7ae4842a-dc23-4e56-a33d-87df95cade92-kube-api-access-jrwfm\") pod \"7ae4842a-dc23-4e56-a33d-87df95cade92\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") "
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.065614    4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-nova-metadata-neutron-config-0\") pod \"7ae4842a-dc23-4e56-a33d-87df95cade92\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") "
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.065696    4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-neutron-metadata-combined-ca-bundle\") pod \"7ae4842a-dc23-4e56-a33d-87df95cade92\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") "
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.065841    4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-ssh-key-openstack-edpm-ipam\") pod \"7ae4842a-dc23-4e56-a33d-87df95cade92\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") "
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.065887    4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-neutron-ovn-metadata-agent-neutron-config-0\") pod \"7ae4842a-dc23-4e56-a33d-87df95cade92\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") "
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.066040    4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-inventory\") pod \"7ae4842a-dc23-4e56-a33d-87df95cade92\" (UID: \"7ae4842a-dc23-4e56-a33d-87df95cade92\") "
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.075265    4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "7ae4842a-dc23-4e56-a33d-87df95cade92" (UID: "7ae4842a-dc23-4e56-a33d-87df95cade92"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.075577    4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ae4842a-dc23-4e56-a33d-87df95cade92-kube-api-access-jrwfm" (OuterVolumeSpecName: "kube-api-access-jrwfm") pod "7ae4842a-dc23-4e56-a33d-87df95cade92" (UID: "7ae4842a-dc23-4e56-a33d-87df95cade92"). InnerVolumeSpecName "kube-api-access-jrwfm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.096510    4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-inventory" (OuterVolumeSpecName: "inventory") pod "7ae4842a-dc23-4e56-a33d-87df95cade92" (UID: "7ae4842a-dc23-4e56-a33d-87df95cade92"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.102987    4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7ae4842a-dc23-4e56-a33d-87df95cade92" (UID: "7ae4842a-dc23-4e56-a33d-87df95cade92"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.106885    4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "7ae4842a-dc23-4e56-a33d-87df95cade92" (UID: "7ae4842a-dc23-4e56-a33d-87df95cade92"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.110467    4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "7ae4842a-dc23-4e56-a33d-87df95cade92" (UID: "7ae4842a-dc23-4e56-a33d-87df95cade92"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.183118    4942 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\""
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.183172    4942 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.183192    4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.183212    4942 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.183234    4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ae4842a-dc23-4e56-a33d-87df95cade92-inventory\") on node \"crc\" DevicePath \"\""
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.183251    4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrwfm\" (UniqueName: \"kubernetes.io/projected/7ae4842a-dc23-4e56-a33d-87df95cade92-kube-api-access-jrwfm\") on node \"crc\" DevicePath \"\""
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.415441    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7" event={"ID":"7ae4842a-dc23-4e56-a33d-87df95cade92","Type":"ContainerDied","Data":"261d9e0939cda0fb87a9e22441ae81a2f95c444b1440ca9d5683679ec038b2ae"}
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.415491    4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="261d9e0939cda0fb87a9e22441ae81a2f95c444b1440ca9d5683679ec038b2ae"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.415517    4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wzdf7"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.527308    4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq"]
Feb 18 19:51:13 crc kubenswrapper[4942]: E0218 19:51:13.528015    4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae4842a-dc23-4e56-a33d-87df95cade92" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.528037    4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae4842a-dc23-4e56-a33d-87df95cade92" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.528270    4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ae4842a-dc23-4e56-a33d-87df95cade92" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.529660    4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.531876    4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.533019    4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.533080    4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.533546    4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.537563    4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.545862    4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq"]
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.690870    4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.690958    4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llwjc\" (UniqueName: \"kubernetes.io/projected/1924338e-aea6-474f-9216-bb7eb32dc5fe-kube-api-access-llwjc\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.691049    4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.691098    4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.691326    4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.792908    4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.793117    4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.793197    4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llwjc\" (UniqueName: \"kubernetes.io/projected/1924338e-aea6-474f-9216-bb7eb32dc5fe-kube-api-access-llwjc\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.793257    4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.793305    4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.798256    4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.798800    4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.799212    4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.799746    4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.817258    4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llwjc\" (UniqueName: \"kubernetes.io/projected/1924338e-aea6-474f-9216-bb7eb32dc5fe-kube-api-access-llwjc\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq"
Feb 18 19:51:13 crc kubenswrapper[4942]: I0218 19:51:13.864187    4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq"
Feb 18 19:51:14 crc kubenswrapper[4942]: I0218 19:51:14.482558    4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq"]
Feb 18 19:51:14 crc kubenswrapper[4942]: W0218 19:51:14.487363    4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1924338e_aea6_474f_9216_bb7eb32dc5fe.slice/crio-85a49794f2563fabd1097b6b3517a1243e11855d7bbbaa0e4c993f19ad38505f WatchSource:0}: Error finding container 85a49794f2563fabd1097b6b3517a1243e11855d7bbbaa0e4c993f19ad38505f: Status 404 returned error can't find the container with id 85a49794f2563fabd1097b6b3517a1243e11855d7bbbaa0e4c993f19ad38505f
Feb 18 19:51:15 crc kubenswrapper[4942]: I0218 19:51:15.441232    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" event={"ID":"1924338e-aea6-474f-9216-bb7eb32dc5fe","Type":"ContainerStarted","Data":"0881a0c3a7d6de31317f11d4bbacc01b597b4d2f4939061d09363608ec65d1f7"}
Feb 18 19:51:15 crc kubenswrapper[4942]: I0218 19:51:15.441613    4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" event={"ID":"1924338e-aea6-474f-9216-bb7eb32dc5fe","Type":"ContainerStarted","Data":"85a49794f2563fabd1097b6b3517a1243e11855d7bbbaa0e4c993f19ad38505f"}
Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.575244    4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" podStartSLOduration=54.149974486 podStartE2EDuration="54.575214165s" podCreationTimestamp="2026-02-18 19:51:13 +0000 UTC" firstStartedPulling="2026-02-18 19:51:14.49070449 +0000 UTC m=+2034.195637165" lastFinishedPulling="2026-02-18 19:51:14.915944179 +0000 UTC m=+2034.620876844"
observedRunningTime="2026-02-18 19:51:15.478119367 +0000 UTC m=+2035.183052062" watchObservedRunningTime="2026-02-18 19:52:07.575214165 +0000 UTC m=+2087.280146870" Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.596044 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zf5jr"] Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.599074 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.610380 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zf5jr"] Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.718240 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ba0702b-f077-473e-9df3-2cc59e94d7d9-utilities\") pod \"redhat-operators-zf5jr\" (UID: \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\") " pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.718519 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ba0702b-f077-473e-9df3-2cc59e94d7d9-catalog-content\") pod \"redhat-operators-zf5jr\" (UID: \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\") " pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.718715 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pd6b\" (UniqueName: \"kubernetes.io/projected/2ba0702b-f077-473e-9df3-2cc59e94d7d9-kube-api-access-8pd6b\") pod \"redhat-operators-zf5jr\" (UID: \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\") " pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.820587 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ba0702b-f077-473e-9df3-2cc59e94d7d9-catalog-content\") pod \"redhat-operators-zf5jr\" (UID: \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\") " pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.820951 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pd6b\" (UniqueName: \"kubernetes.io/projected/2ba0702b-f077-473e-9df3-2cc59e94d7d9-kube-api-access-8pd6b\") pod \"redhat-operators-zf5jr\" (UID: \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\") " pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.821160 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ba0702b-f077-473e-9df3-2cc59e94d7d9-utilities\") pod \"redhat-operators-zf5jr\" (UID: \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\") " pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.821166 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ba0702b-f077-473e-9df3-2cc59e94d7d9-catalog-content\") pod \"redhat-operators-zf5jr\" (UID: \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\") " pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.821435 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ba0702b-f077-473e-9df3-2cc59e94d7d9-utilities\") pod \"redhat-operators-zf5jr\" (UID: \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\") " pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.849455 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8pd6b\" (UniqueName: \"kubernetes.io/projected/2ba0702b-f077-473e-9df3-2cc59e94d7d9-kube-api-access-8pd6b\") pod \"redhat-operators-zf5jr\" (UID: \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\") " pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:07 crc kubenswrapper[4942]: I0218 19:52:07.932746 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:08 crc kubenswrapper[4942]: I0218 19:52:08.460969 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zf5jr"] Feb 18 19:52:09 crc kubenswrapper[4942]: I0218 19:52:09.064914 4942 generic.go:334] "Generic (PLEG): container finished" podID="2ba0702b-f077-473e-9df3-2cc59e94d7d9" containerID="e807aa8bf9dcd9cc1efaf3cd63daaa9547080e906a8f9c3e5c01fe164fc9d8bd" exitCode=0 Feb 18 19:52:09 crc kubenswrapper[4942]: I0218 19:52:09.065170 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zf5jr" event={"ID":"2ba0702b-f077-473e-9df3-2cc59e94d7d9","Type":"ContainerDied","Data":"e807aa8bf9dcd9cc1efaf3cd63daaa9547080e906a8f9c3e5c01fe164fc9d8bd"} Feb 18 19:52:09 crc kubenswrapper[4942]: I0218 19:52:09.065193 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zf5jr" event={"ID":"2ba0702b-f077-473e-9df3-2cc59e94d7d9","Type":"ContainerStarted","Data":"c6bd6168be7acfd240df4b763697b61db8aab69181f9ea02390aaa8a2d3ef101"} Feb 18 19:52:11 crc kubenswrapper[4942]: I0218 19:52:11.088237 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zf5jr" event={"ID":"2ba0702b-f077-473e-9df3-2cc59e94d7d9","Type":"ContainerStarted","Data":"505c9b0e9c93a2776191fa6a8bd33b933c92b8a1277cb229c365dfc910ef8c03"} Feb 18 19:52:12 crc kubenswrapper[4942]: I0218 19:52:12.101900 4942 generic.go:334] "Generic (PLEG): container finished" 
podID="2ba0702b-f077-473e-9df3-2cc59e94d7d9" containerID="505c9b0e9c93a2776191fa6a8bd33b933c92b8a1277cb229c365dfc910ef8c03" exitCode=0 Feb 18 19:52:12 crc kubenswrapper[4942]: I0218 19:52:12.101965 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zf5jr" event={"ID":"2ba0702b-f077-473e-9df3-2cc59e94d7d9","Type":"ContainerDied","Data":"505c9b0e9c93a2776191fa6a8bd33b933c92b8a1277cb229c365dfc910ef8c03"} Feb 18 19:52:13 crc kubenswrapper[4942]: I0218 19:52:13.119712 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zf5jr" event={"ID":"2ba0702b-f077-473e-9df3-2cc59e94d7d9","Type":"ContainerStarted","Data":"08dee3a7f9a9ebb3558251a5269d86f2a61de6f08704f9170dcc51697b628f01"} Feb 18 19:52:13 crc kubenswrapper[4942]: I0218 19:52:13.155369 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zf5jr" podStartSLOduration=2.708749995 podStartE2EDuration="6.155348352s" podCreationTimestamp="2026-02-18 19:52:07 +0000 UTC" firstStartedPulling="2026-02-18 19:52:09.078590586 +0000 UTC m=+2088.783523251" lastFinishedPulling="2026-02-18 19:52:12.525188933 +0000 UTC m=+2092.230121608" observedRunningTime="2026-02-18 19:52:13.144387491 +0000 UTC m=+2092.849320196" watchObservedRunningTime="2026-02-18 19:52:13.155348352 +0000 UTC m=+2092.860281027" Feb 18 19:52:17 crc kubenswrapper[4942]: I0218 19:52:17.934000 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:17 crc kubenswrapper[4942]: I0218 19:52:17.934638 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:18 crc kubenswrapper[4942]: I0218 19:52:18.995865 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zf5jr" podUID="2ba0702b-f077-473e-9df3-2cc59e94d7d9" 
containerName="registry-server" probeResult="failure" output=< Feb 18 19:52:18 crc kubenswrapper[4942]: timeout: failed to connect service ":50051" within 1s Feb 18 19:52:18 crc kubenswrapper[4942]: > Feb 18 19:52:27 crc kubenswrapper[4942]: I0218 19:52:27.982332 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:28 crc kubenswrapper[4942]: I0218 19:52:28.044211 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:28 crc kubenswrapper[4942]: I0218 19:52:28.250635 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zf5jr"] Feb 18 19:52:29 crc kubenswrapper[4942]: I0218 19:52:29.277394 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zf5jr" podUID="2ba0702b-f077-473e-9df3-2cc59e94d7d9" containerName="registry-server" containerID="cri-o://08dee3a7f9a9ebb3558251a5269d86f2a61de6f08704f9170dcc51697b628f01" gracePeriod=2 Feb 18 19:52:29 crc kubenswrapper[4942]: I0218 19:52:29.770535 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:29 crc kubenswrapper[4942]: I0218 19:52:29.779605 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pd6b\" (UniqueName: \"kubernetes.io/projected/2ba0702b-f077-473e-9df3-2cc59e94d7d9-kube-api-access-8pd6b\") pod \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\" (UID: \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\") " Feb 18 19:52:29 crc kubenswrapper[4942]: I0218 19:52:29.779658 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ba0702b-f077-473e-9df3-2cc59e94d7d9-catalog-content\") pod \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\" (UID: \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\") " Feb 18 19:52:29 crc kubenswrapper[4942]: I0218 19:52:29.779867 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ba0702b-f077-473e-9df3-2cc59e94d7d9-utilities\") pod \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\" (UID: \"2ba0702b-f077-473e-9df3-2cc59e94d7d9\") " Feb 18 19:52:29 crc kubenswrapper[4942]: I0218 19:52:29.780584 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ba0702b-f077-473e-9df3-2cc59e94d7d9-utilities" (OuterVolumeSpecName: "utilities") pod "2ba0702b-f077-473e-9df3-2cc59e94d7d9" (UID: "2ba0702b-f077-473e-9df3-2cc59e94d7d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:52:29 crc kubenswrapper[4942]: I0218 19:52:29.787998 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ba0702b-f077-473e-9df3-2cc59e94d7d9-kube-api-access-8pd6b" (OuterVolumeSpecName: "kube-api-access-8pd6b") pod "2ba0702b-f077-473e-9df3-2cc59e94d7d9" (UID: "2ba0702b-f077-473e-9df3-2cc59e94d7d9"). InnerVolumeSpecName "kube-api-access-8pd6b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:52:29 crc kubenswrapper[4942]: I0218 19:52:29.881123 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ba0702b-f077-473e-9df3-2cc59e94d7d9-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:29 crc kubenswrapper[4942]: I0218 19:52:29.881150 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pd6b\" (UniqueName: \"kubernetes.io/projected/2ba0702b-f077-473e-9df3-2cc59e94d7d9-kube-api-access-8pd6b\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:29 crc kubenswrapper[4942]: I0218 19:52:29.905586 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ba0702b-f077-473e-9df3-2cc59e94d7d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ba0702b-f077-473e-9df3-2cc59e94d7d9" (UID: "2ba0702b-f077-473e-9df3-2cc59e94d7d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:52:29 crc kubenswrapper[4942]: I0218 19:52:29.982505 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ba0702b-f077-473e-9df3-2cc59e94d7d9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.295353 4942 generic.go:334] "Generic (PLEG): container finished" podID="2ba0702b-f077-473e-9df3-2cc59e94d7d9" containerID="08dee3a7f9a9ebb3558251a5269d86f2a61de6f08704f9170dcc51697b628f01" exitCode=0 Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.295424 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zf5jr" event={"ID":"2ba0702b-f077-473e-9df3-2cc59e94d7d9","Type":"ContainerDied","Data":"08dee3a7f9a9ebb3558251a5269d86f2a61de6f08704f9170dcc51697b628f01"} Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.295476 4942 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-zf5jr" event={"ID":"2ba0702b-f077-473e-9df3-2cc59e94d7d9","Type":"ContainerDied","Data":"c6bd6168be7acfd240df4b763697b61db8aab69181f9ea02390aaa8a2d3ef101"} Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.295498 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zf5jr" Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.295513 4942 scope.go:117] "RemoveContainer" containerID="08dee3a7f9a9ebb3558251a5269d86f2a61de6f08704f9170dcc51697b628f01" Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.328076 4942 scope.go:117] "RemoveContainer" containerID="505c9b0e9c93a2776191fa6a8bd33b933c92b8a1277cb229c365dfc910ef8c03" Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.347367 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zf5jr"] Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.360535 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zf5jr"] Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.371570 4942 scope.go:117] "RemoveContainer" containerID="e807aa8bf9dcd9cc1efaf3cd63daaa9547080e906a8f9c3e5c01fe164fc9d8bd" Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.420204 4942 scope.go:117] "RemoveContainer" containerID="08dee3a7f9a9ebb3558251a5269d86f2a61de6f08704f9170dcc51697b628f01" Feb 18 19:52:30 crc kubenswrapper[4942]: E0218 19:52:30.420973 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08dee3a7f9a9ebb3558251a5269d86f2a61de6f08704f9170dcc51697b628f01\": container with ID starting with 08dee3a7f9a9ebb3558251a5269d86f2a61de6f08704f9170dcc51697b628f01 not found: ID does not exist" containerID="08dee3a7f9a9ebb3558251a5269d86f2a61de6f08704f9170dcc51697b628f01" Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.421055 4942 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08dee3a7f9a9ebb3558251a5269d86f2a61de6f08704f9170dcc51697b628f01"} err="failed to get container status \"08dee3a7f9a9ebb3558251a5269d86f2a61de6f08704f9170dcc51697b628f01\": rpc error: code = NotFound desc = could not find container \"08dee3a7f9a9ebb3558251a5269d86f2a61de6f08704f9170dcc51697b628f01\": container with ID starting with 08dee3a7f9a9ebb3558251a5269d86f2a61de6f08704f9170dcc51697b628f01 not found: ID does not exist" Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.421095 4942 scope.go:117] "RemoveContainer" containerID="505c9b0e9c93a2776191fa6a8bd33b933c92b8a1277cb229c365dfc910ef8c03" Feb 18 19:52:30 crc kubenswrapper[4942]: E0218 19:52:30.421679 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"505c9b0e9c93a2776191fa6a8bd33b933c92b8a1277cb229c365dfc910ef8c03\": container with ID starting with 505c9b0e9c93a2776191fa6a8bd33b933c92b8a1277cb229c365dfc910ef8c03 not found: ID does not exist" containerID="505c9b0e9c93a2776191fa6a8bd33b933c92b8a1277cb229c365dfc910ef8c03" Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.421706 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"505c9b0e9c93a2776191fa6a8bd33b933c92b8a1277cb229c365dfc910ef8c03"} err="failed to get container status \"505c9b0e9c93a2776191fa6a8bd33b933c92b8a1277cb229c365dfc910ef8c03\": rpc error: code = NotFound desc = could not find container \"505c9b0e9c93a2776191fa6a8bd33b933c92b8a1277cb229c365dfc910ef8c03\": container with ID starting with 505c9b0e9c93a2776191fa6a8bd33b933c92b8a1277cb229c365dfc910ef8c03 not found: ID does not exist" Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.421723 4942 scope.go:117] "RemoveContainer" containerID="e807aa8bf9dcd9cc1efaf3cd63daaa9547080e906a8f9c3e5c01fe164fc9d8bd" Feb 18 19:52:30 crc kubenswrapper[4942]: E0218 
19:52:30.422158 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e807aa8bf9dcd9cc1efaf3cd63daaa9547080e906a8f9c3e5c01fe164fc9d8bd\": container with ID starting with e807aa8bf9dcd9cc1efaf3cd63daaa9547080e906a8f9c3e5c01fe164fc9d8bd not found: ID does not exist" containerID="e807aa8bf9dcd9cc1efaf3cd63daaa9547080e906a8f9c3e5c01fe164fc9d8bd" Feb 18 19:52:30 crc kubenswrapper[4942]: I0218 19:52:30.422196 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e807aa8bf9dcd9cc1efaf3cd63daaa9547080e906a8f9c3e5c01fe164fc9d8bd"} err="failed to get container status \"e807aa8bf9dcd9cc1efaf3cd63daaa9547080e906a8f9c3e5c01fe164fc9d8bd\": rpc error: code = NotFound desc = could not find container \"e807aa8bf9dcd9cc1efaf3cd63daaa9547080e906a8f9c3e5c01fe164fc9d8bd\": container with ID starting with e807aa8bf9dcd9cc1efaf3cd63daaa9547080e906a8f9c3e5c01fe164fc9d8bd not found: ID does not exist" Feb 18 19:52:31 crc kubenswrapper[4942]: I0218 19:52:31.052557 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ba0702b-f077-473e-9df3-2cc59e94d7d9" path="/var/lib/kubelet/pods/2ba0702b-f077-473e-9df3-2cc59e94d7d9/volumes" Feb 18 19:52:53 crc kubenswrapper[4942]: I0218 19:52:53.740864 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:52:53 crc kubenswrapper[4942]: I0218 19:52:53.741378 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 18 19:53:15 crc kubenswrapper[4942]: I0218 19:53:15.872726 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9qvpq"] Feb 18 19:53:15 crc kubenswrapper[4942]: E0218 19:53:15.873879 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ba0702b-f077-473e-9df3-2cc59e94d7d9" containerName="extract-utilities" Feb 18 19:53:15 crc kubenswrapper[4942]: I0218 19:53:15.873898 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba0702b-f077-473e-9df3-2cc59e94d7d9" containerName="extract-utilities" Feb 18 19:53:15 crc kubenswrapper[4942]: E0218 19:53:15.873925 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ba0702b-f077-473e-9df3-2cc59e94d7d9" containerName="extract-content" Feb 18 19:53:15 crc kubenswrapper[4942]: I0218 19:53:15.873934 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba0702b-f077-473e-9df3-2cc59e94d7d9" containerName="extract-content" Feb 18 19:53:15 crc kubenswrapper[4942]: E0218 19:53:15.873950 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ba0702b-f077-473e-9df3-2cc59e94d7d9" containerName="registry-server" Feb 18 19:53:15 crc kubenswrapper[4942]: I0218 19:53:15.873958 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba0702b-f077-473e-9df3-2cc59e94d7d9" containerName="registry-server" Feb 18 19:53:15 crc kubenswrapper[4942]: I0218 19:53:15.874247 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ba0702b-f077-473e-9df3-2cc59e94d7d9" containerName="registry-server" Feb 18 19:53:15 crc kubenswrapper[4942]: I0218 19:53:15.876335 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:15 crc kubenswrapper[4942]: I0218 19:53:15.903098 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9qvpq"] Feb 18 19:53:16 crc kubenswrapper[4942]: I0218 19:53:16.078537 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5lcs\" (UniqueName: \"kubernetes.io/projected/959568c6-1106-46e0-89f4-d10e629dc2be-kube-api-access-j5lcs\") pod \"certified-operators-9qvpq\" (UID: \"959568c6-1106-46e0-89f4-d10e629dc2be\") " pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:16 crc kubenswrapper[4942]: I0218 19:53:16.078605 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/959568c6-1106-46e0-89f4-d10e629dc2be-utilities\") pod \"certified-operators-9qvpq\" (UID: \"959568c6-1106-46e0-89f4-d10e629dc2be\") " pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:16 crc kubenswrapper[4942]: I0218 19:53:16.078764 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/959568c6-1106-46e0-89f4-d10e629dc2be-catalog-content\") pod \"certified-operators-9qvpq\" (UID: \"959568c6-1106-46e0-89f4-d10e629dc2be\") " pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:16 crc kubenswrapper[4942]: I0218 19:53:16.179903 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/959568c6-1106-46e0-89f4-d10e629dc2be-catalog-content\") pod \"certified-operators-9qvpq\" (UID: \"959568c6-1106-46e0-89f4-d10e629dc2be\") " pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:16 crc kubenswrapper[4942]: I0218 19:53:16.180053 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j5lcs\" (UniqueName: \"kubernetes.io/projected/959568c6-1106-46e0-89f4-d10e629dc2be-kube-api-access-j5lcs\") pod \"certified-operators-9qvpq\" (UID: \"959568c6-1106-46e0-89f4-d10e629dc2be\") " pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:16 crc kubenswrapper[4942]: I0218 19:53:16.180120 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/959568c6-1106-46e0-89f4-d10e629dc2be-utilities\") pod \"certified-operators-9qvpq\" (UID: \"959568c6-1106-46e0-89f4-d10e629dc2be\") " pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:16 crc kubenswrapper[4942]: I0218 19:53:16.180471 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/959568c6-1106-46e0-89f4-d10e629dc2be-catalog-content\") pod \"certified-operators-9qvpq\" (UID: \"959568c6-1106-46e0-89f4-d10e629dc2be\") " pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:16 crc kubenswrapper[4942]: I0218 19:53:16.180502 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/959568c6-1106-46e0-89f4-d10e629dc2be-utilities\") pod \"certified-operators-9qvpq\" (UID: \"959568c6-1106-46e0-89f4-d10e629dc2be\") " pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:16 crc kubenswrapper[4942]: I0218 19:53:16.202220 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5lcs\" (UniqueName: \"kubernetes.io/projected/959568c6-1106-46e0-89f4-d10e629dc2be-kube-api-access-j5lcs\") pod \"certified-operators-9qvpq\" (UID: \"959568c6-1106-46e0-89f4-d10e629dc2be\") " pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:16 crc kubenswrapper[4942]: I0218 19:53:16.203764 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:16 crc kubenswrapper[4942]: I0218 19:53:16.728265 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9qvpq"] Feb 18 19:53:16 crc kubenswrapper[4942]: I0218 19:53:16.791046 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qvpq" event={"ID":"959568c6-1106-46e0-89f4-d10e629dc2be","Type":"ContainerStarted","Data":"d2330edf5065932b60465224de9d122c8738d3cdd3854e111c8033441e3e1ef0"} Feb 18 19:53:17 crc kubenswrapper[4942]: I0218 19:53:17.803589 4942 generic.go:334] "Generic (PLEG): container finished" podID="959568c6-1106-46e0-89f4-d10e629dc2be" containerID="c694b344b4d4a5c34d4d2928c1eb64e1984715d854472bf759a407e7aeb4a410" exitCode=0 Feb 18 19:53:17 crc kubenswrapper[4942]: I0218 19:53:17.803683 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qvpq" event={"ID":"959568c6-1106-46e0-89f4-d10e629dc2be","Type":"ContainerDied","Data":"c694b344b4d4a5c34d4d2928c1eb64e1984715d854472bf759a407e7aeb4a410"} Feb 18 19:53:17 crc kubenswrapper[4942]: I0218 19:53:17.808351 4942 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 19:53:18 crc kubenswrapper[4942]: I0218 19:53:18.813786 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qvpq" event={"ID":"959568c6-1106-46e0-89f4-d10e629dc2be","Type":"ContainerStarted","Data":"7d40d7e7d6de523082f27e61b38a952ae88ae3527b41a46694a6d2590b8f7a83"} Feb 18 19:53:19 crc kubenswrapper[4942]: I0218 19:53:19.827277 4942 generic.go:334] "Generic (PLEG): container finished" podID="959568c6-1106-46e0-89f4-d10e629dc2be" containerID="7d40d7e7d6de523082f27e61b38a952ae88ae3527b41a46694a6d2590b8f7a83" exitCode=0 Feb 18 19:53:19 crc kubenswrapper[4942]: I0218 19:53:19.827337 4942 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-9qvpq" event={"ID":"959568c6-1106-46e0-89f4-d10e629dc2be","Type":"ContainerDied","Data":"7d40d7e7d6de523082f27e61b38a952ae88ae3527b41a46694a6d2590b8f7a83"} Feb 18 19:53:20 crc kubenswrapper[4942]: I0218 19:53:20.836729 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qvpq" event={"ID":"959568c6-1106-46e0-89f4-d10e629dc2be","Type":"ContainerStarted","Data":"413ffd85cbec353886beb9831622381d664549d3fab270a7394f2d0645fdf3f4"} Feb 18 19:53:20 crc kubenswrapper[4942]: I0218 19:53:20.860721 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9qvpq" podStartSLOduration=3.466546861 podStartE2EDuration="5.860704699s" podCreationTimestamp="2026-02-18 19:53:15 +0000 UTC" firstStartedPulling="2026-02-18 19:53:17.808027409 +0000 UTC m=+2157.512960084" lastFinishedPulling="2026-02-18 19:53:20.202185237 +0000 UTC m=+2159.907117922" observedRunningTime="2026-02-18 19:53:20.853937129 +0000 UTC m=+2160.558869804" watchObservedRunningTime="2026-02-18 19:53:20.860704699 +0000 UTC m=+2160.565637364" Feb 18 19:53:23 crc kubenswrapper[4942]: I0218 19:53:23.740718 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:53:23 crc kubenswrapper[4942]: I0218 19:53:23.741184 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:53:26 crc kubenswrapper[4942]: I0218 19:53:26.204747 4942 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:26 crc kubenswrapper[4942]: I0218 19:53:26.206023 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:26 crc kubenswrapper[4942]: I0218 19:53:26.276832 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:26 crc kubenswrapper[4942]: I0218 19:53:26.962051 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:27 crc kubenswrapper[4942]: I0218 19:53:27.008778 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9qvpq"] Feb 18 19:53:28 crc kubenswrapper[4942]: I0218 19:53:28.905384 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9qvpq" podUID="959568c6-1106-46e0-89f4-d10e629dc2be" containerName="registry-server" containerID="cri-o://413ffd85cbec353886beb9831622381d664549d3fab270a7394f2d0645fdf3f4" gracePeriod=2 Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.449082 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.555684 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5lcs\" (UniqueName: \"kubernetes.io/projected/959568c6-1106-46e0-89f4-d10e629dc2be-kube-api-access-j5lcs\") pod \"959568c6-1106-46e0-89f4-d10e629dc2be\" (UID: \"959568c6-1106-46e0-89f4-d10e629dc2be\") " Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.556003 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/959568c6-1106-46e0-89f4-d10e629dc2be-utilities\") pod \"959568c6-1106-46e0-89f4-d10e629dc2be\" (UID: \"959568c6-1106-46e0-89f4-d10e629dc2be\") " Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.556124 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/959568c6-1106-46e0-89f4-d10e629dc2be-catalog-content\") pod \"959568c6-1106-46e0-89f4-d10e629dc2be\" (UID: \"959568c6-1106-46e0-89f4-d10e629dc2be\") " Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.556814 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/959568c6-1106-46e0-89f4-d10e629dc2be-utilities" (OuterVolumeSpecName: "utilities") pod "959568c6-1106-46e0-89f4-d10e629dc2be" (UID: "959568c6-1106-46e0-89f4-d10e629dc2be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.562886 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/959568c6-1106-46e0-89f4-d10e629dc2be-kube-api-access-j5lcs" (OuterVolumeSpecName: "kube-api-access-j5lcs") pod "959568c6-1106-46e0-89f4-d10e629dc2be" (UID: "959568c6-1106-46e0-89f4-d10e629dc2be"). InnerVolumeSpecName "kube-api-access-j5lcs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.671959 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5lcs\" (UniqueName: \"kubernetes.io/projected/959568c6-1106-46e0-89f4-d10e629dc2be-kube-api-access-j5lcs\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.671991 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/959568c6-1106-46e0-89f4-d10e629dc2be-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.706396 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/959568c6-1106-46e0-89f4-d10e629dc2be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "959568c6-1106-46e0-89f4-d10e629dc2be" (UID: "959568c6-1106-46e0-89f4-d10e629dc2be"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.773532 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/959568c6-1106-46e0-89f4-d10e629dc2be-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.917800 4942 generic.go:334] "Generic (PLEG): container finished" podID="959568c6-1106-46e0-89f4-d10e629dc2be" containerID="413ffd85cbec353886beb9831622381d664549d3fab270a7394f2d0645fdf3f4" exitCode=0 Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.917847 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9qvpq" event={"ID":"959568c6-1106-46e0-89f4-d10e629dc2be","Type":"ContainerDied","Data":"413ffd85cbec353886beb9831622381d664549d3fab270a7394f2d0645fdf3f4"} Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.917876 4942 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-9qvpq" event={"ID":"959568c6-1106-46e0-89f4-d10e629dc2be","Type":"ContainerDied","Data":"d2330edf5065932b60465224de9d122c8738d3cdd3854e111c8033441e3e1ef0"} Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.917894 4942 scope.go:117] "RemoveContainer" containerID="413ffd85cbec353886beb9831622381d664549d3fab270a7394f2d0645fdf3f4" Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.918028 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9qvpq" Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.949084 4942 scope.go:117] "RemoveContainer" containerID="7d40d7e7d6de523082f27e61b38a952ae88ae3527b41a46694a6d2590b8f7a83" Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.977308 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9qvpq"] Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.979810 4942 scope.go:117] "RemoveContainer" containerID="c694b344b4d4a5c34d4d2928c1eb64e1984715d854472bf759a407e7aeb4a410" Feb 18 19:53:29 crc kubenswrapper[4942]: I0218 19:53:29.989289 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9qvpq"] Feb 18 19:53:30 crc kubenswrapper[4942]: I0218 19:53:30.030011 4942 scope.go:117] "RemoveContainer" containerID="413ffd85cbec353886beb9831622381d664549d3fab270a7394f2d0645fdf3f4" Feb 18 19:53:30 crc kubenswrapper[4942]: E0218 19:53:30.031370 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"413ffd85cbec353886beb9831622381d664549d3fab270a7394f2d0645fdf3f4\": container with ID starting with 413ffd85cbec353886beb9831622381d664549d3fab270a7394f2d0645fdf3f4 not found: ID does not exist" containerID="413ffd85cbec353886beb9831622381d664549d3fab270a7394f2d0645fdf3f4" Feb 18 19:53:30 crc kubenswrapper[4942]: I0218 
19:53:30.031444 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"413ffd85cbec353886beb9831622381d664549d3fab270a7394f2d0645fdf3f4"} err="failed to get container status \"413ffd85cbec353886beb9831622381d664549d3fab270a7394f2d0645fdf3f4\": rpc error: code = NotFound desc = could not find container \"413ffd85cbec353886beb9831622381d664549d3fab270a7394f2d0645fdf3f4\": container with ID starting with 413ffd85cbec353886beb9831622381d664549d3fab270a7394f2d0645fdf3f4 not found: ID does not exist" Feb 18 19:53:30 crc kubenswrapper[4942]: I0218 19:53:30.031487 4942 scope.go:117] "RemoveContainer" containerID="7d40d7e7d6de523082f27e61b38a952ae88ae3527b41a46694a6d2590b8f7a83" Feb 18 19:53:30 crc kubenswrapper[4942]: E0218 19:53:30.031991 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d40d7e7d6de523082f27e61b38a952ae88ae3527b41a46694a6d2590b8f7a83\": container with ID starting with 7d40d7e7d6de523082f27e61b38a952ae88ae3527b41a46694a6d2590b8f7a83 not found: ID does not exist" containerID="7d40d7e7d6de523082f27e61b38a952ae88ae3527b41a46694a6d2590b8f7a83" Feb 18 19:53:30 crc kubenswrapper[4942]: I0218 19:53:30.032068 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d40d7e7d6de523082f27e61b38a952ae88ae3527b41a46694a6d2590b8f7a83"} err="failed to get container status \"7d40d7e7d6de523082f27e61b38a952ae88ae3527b41a46694a6d2590b8f7a83\": rpc error: code = NotFound desc = could not find container \"7d40d7e7d6de523082f27e61b38a952ae88ae3527b41a46694a6d2590b8f7a83\": container with ID starting with 7d40d7e7d6de523082f27e61b38a952ae88ae3527b41a46694a6d2590b8f7a83 not found: ID does not exist" Feb 18 19:53:30 crc kubenswrapper[4942]: I0218 19:53:30.032112 4942 scope.go:117] "RemoveContainer" containerID="c694b344b4d4a5c34d4d2928c1eb64e1984715d854472bf759a407e7aeb4a410" Feb 18 19:53:30 crc 
kubenswrapper[4942]: E0218 19:53:30.032572 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c694b344b4d4a5c34d4d2928c1eb64e1984715d854472bf759a407e7aeb4a410\": container with ID starting with c694b344b4d4a5c34d4d2928c1eb64e1984715d854472bf759a407e7aeb4a410 not found: ID does not exist" containerID="c694b344b4d4a5c34d4d2928c1eb64e1984715d854472bf759a407e7aeb4a410" Feb 18 19:53:30 crc kubenswrapper[4942]: I0218 19:53:30.032675 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c694b344b4d4a5c34d4d2928c1eb64e1984715d854472bf759a407e7aeb4a410"} err="failed to get container status \"c694b344b4d4a5c34d4d2928c1eb64e1984715d854472bf759a407e7aeb4a410\": rpc error: code = NotFound desc = could not find container \"c694b344b4d4a5c34d4d2928c1eb64e1984715d854472bf759a407e7aeb4a410\": container with ID starting with c694b344b4d4a5c34d4d2928c1eb64e1984715d854472bf759a407e7aeb4a410 not found: ID does not exist" Feb 18 19:53:31 crc kubenswrapper[4942]: I0218 19:53:31.056445 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="959568c6-1106-46e0-89f4-d10e629dc2be" path="/var/lib/kubelet/pods/959568c6-1106-46e0-89f4-d10e629dc2be/volumes" Feb 18 19:53:53 crc kubenswrapper[4942]: I0218 19:53:53.740976 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:53:53 crc kubenswrapper[4942]: I0218 19:53:53.741492 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 18 19:53:53 crc kubenswrapper[4942]: I0218 19:53:53.741550 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:53:53 crc kubenswrapper[4942]: I0218 19:53:53.742230 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c81e1d649813c8beecb89429c1c4dde799b86b0af5d8804642a6a83d2ee52071"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:53:53 crc kubenswrapper[4942]: I0218 19:53:53.742300 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://c81e1d649813c8beecb89429c1c4dde799b86b0af5d8804642a6a83d2ee52071" gracePeriod=600 Feb 18 19:53:54 crc kubenswrapper[4942]: I0218 19:53:54.151479 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="c81e1d649813c8beecb89429c1c4dde799b86b0af5d8804642a6a83d2ee52071" exitCode=0 Feb 18 19:53:54 crc kubenswrapper[4942]: I0218 19:53:54.151552 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"c81e1d649813c8beecb89429c1c4dde799b86b0af5d8804642a6a83d2ee52071"} Feb 18 19:53:54 crc kubenswrapper[4942]: I0218 19:53:54.152069 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" 
event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d"} Feb 18 19:53:54 crc kubenswrapper[4942]: I0218 19:53:54.152106 4942 scope.go:117] "RemoveContainer" containerID="e8694fad4507ebe591fc3e29212876da9f32320a8fd16e4bcde4ab412ae86b19" Feb 18 19:54:01 crc kubenswrapper[4942]: I0218 19:54:01.868587 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4hkw6"] Feb 18 19:54:01 crc kubenswrapper[4942]: E0218 19:54:01.869628 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="959568c6-1106-46e0-89f4-d10e629dc2be" containerName="registry-server" Feb 18 19:54:01 crc kubenswrapper[4942]: I0218 19:54:01.869641 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="959568c6-1106-46e0-89f4-d10e629dc2be" containerName="registry-server" Feb 18 19:54:01 crc kubenswrapper[4942]: E0218 19:54:01.869669 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="959568c6-1106-46e0-89f4-d10e629dc2be" containerName="extract-utilities" Feb 18 19:54:01 crc kubenswrapper[4942]: I0218 19:54:01.869676 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="959568c6-1106-46e0-89f4-d10e629dc2be" containerName="extract-utilities" Feb 18 19:54:01 crc kubenswrapper[4942]: E0218 19:54:01.869702 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="959568c6-1106-46e0-89f4-d10e629dc2be" containerName="extract-content" Feb 18 19:54:01 crc kubenswrapper[4942]: I0218 19:54:01.869710 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="959568c6-1106-46e0-89f4-d10e629dc2be" containerName="extract-content" Feb 18 19:54:01 crc kubenswrapper[4942]: I0218 19:54:01.870526 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="959568c6-1106-46e0-89f4-d10e629dc2be" containerName="registry-server" Feb 18 19:54:01 crc kubenswrapper[4942]: I0218 19:54:01.874039 4942 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:01 crc kubenswrapper[4942]: I0218 19:54:01.885585 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4hkw6"] Feb 18 19:54:02 crc kubenswrapper[4942]: I0218 19:54:02.029981 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-utilities\") pod \"community-operators-4hkw6\" (UID: \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\") " pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:02 crc kubenswrapper[4942]: I0218 19:54:02.030116 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lck4\" (UniqueName: \"kubernetes.io/projected/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-kube-api-access-9lck4\") pod \"community-operators-4hkw6\" (UID: \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\") " pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:02 crc kubenswrapper[4942]: I0218 19:54:02.030196 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-catalog-content\") pod \"community-operators-4hkw6\" (UID: \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\") " pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:02 crc kubenswrapper[4942]: I0218 19:54:02.132123 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-utilities\") pod \"community-operators-4hkw6\" (UID: \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\") " pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:02 crc kubenswrapper[4942]: I0218 19:54:02.132210 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9lck4\" (UniqueName: \"kubernetes.io/projected/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-kube-api-access-9lck4\") pod \"community-operators-4hkw6\" (UID: \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\") " pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:02 crc kubenswrapper[4942]: I0218 19:54:02.132250 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-catalog-content\") pod \"community-operators-4hkw6\" (UID: \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\") " pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:02 crc kubenswrapper[4942]: I0218 19:54:02.133015 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-catalog-content\") pod \"community-operators-4hkw6\" (UID: \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\") " pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:02 crc kubenswrapper[4942]: I0218 19:54:02.133176 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-utilities\") pod \"community-operators-4hkw6\" (UID: \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\") " pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:02 crc kubenswrapper[4942]: I0218 19:54:02.154971 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lck4\" (UniqueName: \"kubernetes.io/projected/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-kube-api-access-9lck4\") pod \"community-operators-4hkw6\" (UID: \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\") " pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:02 crc kubenswrapper[4942]: I0218 19:54:02.205523 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:02 crc kubenswrapper[4942]: I0218 19:54:02.683897 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4hkw6"] Feb 18 19:54:03 crc kubenswrapper[4942]: I0218 19:54:03.246693 4942 generic.go:334] "Generic (PLEG): container finished" podID="5e924cf0-b2c6-4897-b2e7-4f9b8897d083" containerID="2d0712a79af0cd17e9d3752f83b8fbf806888be761852b2cb3edb3ac9aa0c67a" exitCode=0 Feb 18 19:54:03 crc kubenswrapper[4942]: I0218 19:54:03.246788 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hkw6" event={"ID":"5e924cf0-b2c6-4897-b2e7-4f9b8897d083","Type":"ContainerDied","Data":"2d0712a79af0cd17e9d3752f83b8fbf806888be761852b2cb3edb3ac9aa0c67a"} Feb 18 19:54:03 crc kubenswrapper[4942]: I0218 19:54:03.247957 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hkw6" event={"ID":"5e924cf0-b2c6-4897-b2e7-4f9b8897d083","Type":"ContainerStarted","Data":"3827ab588f8db14da1f2e9a66731d1db7bc3e013ccdb2c77ca4f1d290292025b"} Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.258252 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hkw6" event={"ID":"5e924cf0-b2c6-4897-b2e7-4f9b8897d083","Type":"ContainerStarted","Data":"ea194218bfe8d8fb6e0edb5dab22c760c8badc5d5af529d1765434569028a7a8"} Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.664566 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lszsq"] Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.666909 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.678538 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lszsq"] Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.787293 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h69hf\" (UniqueName: \"kubernetes.io/projected/383667c1-9137-4f4f-a870-3bbc3dee3050-kube-api-access-h69hf\") pod \"redhat-marketplace-lszsq\" (UID: \"383667c1-9137-4f4f-a870-3bbc3dee3050\") " pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.787449 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383667c1-9137-4f4f-a870-3bbc3dee3050-catalog-content\") pod \"redhat-marketplace-lszsq\" (UID: \"383667c1-9137-4f4f-a870-3bbc3dee3050\") " pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.787498 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383667c1-9137-4f4f-a870-3bbc3dee3050-utilities\") pod \"redhat-marketplace-lszsq\" (UID: \"383667c1-9137-4f4f-a870-3bbc3dee3050\") " pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.889712 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383667c1-9137-4f4f-a870-3bbc3dee3050-catalog-content\") pod \"redhat-marketplace-lszsq\" (UID: \"383667c1-9137-4f4f-a870-3bbc3dee3050\") " pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.889825 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383667c1-9137-4f4f-a870-3bbc3dee3050-utilities\") pod \"redhat-marketplace-lszsq\" (UID: \"383667c1-9137-4f4f-a870-3bbc3dee3050\") " pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.889904 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h69hf\" (UniqueName: \"kubernetes.io/projected/383667c1-9137-4f4f-a870-3bbc3dee3050-kube-api-access-h69hf\") pod \"redhat-marketplace-lszsq\" (UID: \"383667c1-9137-4f4f-a870-3bbc3dee3050\") " pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.890327 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383667c1-9137-4f4f-a870-3bbc3dee3050-catalog-content\") pod \"redhat-marketplace-lszsq\" (UID: \"383667c1-9137-4f4f-a870-3bbc3dee3050\") " pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.890342 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383667c1-9137-4f4f-a870-3bbc3dee3050-utilities\") pod \"redhat-marketplace-lszsq\" (UID: \"383667c1-9137-4f4f-a870-3bbc3dee3050\") " pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.918744 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h69hf\" (UniqueName: \"kubernetes.io/projected/383667c1-9137-4f4f-a870-3bbc3dee3050-kube-api-access-h69hf\") pod \"redhat-marketplace-lszsq\" (UID: \"383667c1-9137-4f4f-a870-3bbc3dee3050\") " pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:04 crc kubenswrapper[4942]: I0218 19:54:04.991501 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:05 crc kubenswrapper[4942]: I0218 19:54:05.269003 4942 generic.go:334] "Generic (PLEG): container finished" podID="5e924cf0-b2c6-4897-b2e7-4f9b8897d083" containerID="ea194218bfe8d8fb6e0edb5dab22c760c8badc5d5af529d1765434569028a7a8" exitCode=0 Feb 18 19:54:05 crc kubenswrapper[4942]: I0218 19:54:05.269315 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hkw6" event={"ID":"5e924cf0-b2c6-4897-b2e7-4f9b8897d083","Type":"ContainerDied","Data":"ea194218bfe8d8fb6e0edb5dab22c760c8badc5d5af529d1765434569028a7a8"} Feb 18 19:54:05 crc kubenswrapper[4942]: I0218 19:54:05.533517 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lszsq"] Feb 18 19:54:05 crc kubenswrapper[4942]: W0218 19:54:05.539449 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod383667c1_9137_4f4f_a870_3bbc3dee3050.slice/crio-0ac470feecbc5858e591f7152f03a704a24ee4c2d533e4c44891108eea6256a3 WatchSource:0}: Error finding container 0ac470feecbc5858e591f7152f03a704a24ee4c2d533e4c44891108eea6256a3: Status 404 returned error can't find the container with id 0ac470feecbc5858e591f7152f03a704a24ee4c2d533e4c44891108eea6256a3 Feb 18 19:54:06 crc kubenswrapper[4942]: I0218 19:54:06.285353 4942 generic.go:334] "Generic (PLEG): container finished" podID="383667c1-9137-4f4f-a870-3bbc3dee3050" containerID="fced62a823aabc9eb96d7dc1c21c39c26f67347f087ea0b1c45827cef7157377" exitCode=0 Feb 18 19:54:06 crc kubenswrapper[4942]: I0218 19:54:06.285414 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lszsq" event={"ID":"383667c1-9137-4f4f-a870-3bbc3dee3050","Type":"ContainerDied","Data":"fced62a823aabc9eb96d7dc1c21c39c26f67347f087ea0b1c45827cef7157377"} Feb 18 19:54:06 crc kubenswrapper[4942]: I0218 
19:54:06.286002 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lszsq" event={"ID":"383667c1-9137-4f4f-a870-3bbc3dee3050","Type":"ContainerStarted","Data":"0ac470feecbc5858e591f7152f03a704a24ee4c2d533e4c44891108eea6256a3"} Feb 18 19:54:06 crc kubenswrapper[4942]: I0218 19:54:06.291431 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hkw6" event={"ID":"5e924cf0-b2c6-4897-b2e7-4f9b8897d083","Type":"ContainerStarted","Data":"1d77495a36747426d872d719c4b5c29ee0e6e958fc8df65ab5ab215611207fcb"} Feb 18 19:54:06 crc kubenswrapper[4942]: I0218 19:54:06.329242 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4hkw6" podStartSLOduration=2.9359082340000002 podStartE2EDuration="5.329218359s" podCreationTimestamp="2026-02-18 19:54:01 +0000 UTC" firstStartedPulling="2026-02-18 19:54:03.248998178 +0000 UTC m=+2202.953930833" lastFinishedPulling="2026-02-18 19:54:05.642308303 +0000 UTC m=+2205.347240958" observedRunningTime="2026-02-18 19:54:06.325839039 +0000 UTC m=+2206.030771704" watchObservedRunningTime="2026-02-18 19:54:06.329218359 +0000 UTC m=+2206.034151044" Feb 18 19:54:08 crc kubenswrapper[4942]: I0218 19:54:08.309618 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lszsq" event={"ID":"383667c1-9137-4f4f-a870-3bbc3dee3050","Type":"ContainerStarted","Data":"01241740eda1e01b1148596092553039bc8d0f4fa82bfe1851e652e1a9db2c10"} Feb 18 19:54:09 crc kubenswrapper[4942]: I0218 19:54:09.320267 4942 generic.go:334] "Generic (PLEG): container finished" podID="383667c1-9137-4f4f-a870-3bbc3dee3050" containerID="01241740eda1e01b1148596092553039bc8d0f4fa82bfe1851e652e1a9db2c10" exitCode=0 Feb 18 19:54:09 crc kubenswrapper[4942]: I0218 19:54:09.320341 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lszsq" 
event={"ID":"383667c1-9137-4f4f-a870-3bbc3dee3050","Type":"ContainerDied","Data":"01241740eda1e01b1148596092553039bc8d0f4fa82bfe1851e652e1a9db2c10"} Feb 18 19:54:10 crc kubenswrapper[4942]: I0218 19:54:10.331728 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lszsq" event={"ID":"383667c1-9137-4f4f-a870-3bbc3dee3050","Type":"ContainerStarted","Data":"e6e36f3a740b91dbd03b01c5e3d04984228711747c2ab244bd4357d34fe38eec"} Feb 18 19:54:10 crc kubenswrapper[4942]: I0218 19:54:10.360856 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lszsq" podStartSLOduration=2.891764562 podStartE2EDuration="6.360833546s" podCreationTimestamp="2026-02-18 19:54:04 +0000 UTC" firstStartedPulling="2026-02-18 19:54:06.288333473 +0000 UTC m=+2205.993266148" lastFinishedPulling="2026-02-18 19:54:09.757402447 +0000 UTC m=+2209.462335132" observedRunningTime="2026-02-18 19:54:10.353779739 +0000 UTC m=+2210.058712424" watchObservedRunningTime="2026-02-18 19:54:10.360833546 +0000 UTC m=+2210.065766211" Feb 18 19:54:12 crc kubenswrapper[4942]: I0218 19:54:12.205651 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:12 crc kubenswrapper[4942]: I0218 19:54:12.206253 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:12 crc kubenswrapper[4942]: I0218 19:54:12.280887 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:12 crc kubenswrapper[4942]: I0218 19:54:12.403376 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:12 crc kubenswrapper[4942]: I0218 19:54:12.887188 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-4hkw6"] Feb 18 19:54:14 crc kubenswrapper[4942]: I0218 19:54:14.364953 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4hkw6" podUID="5e924cf0-b2c6-4897-b2e7-4f9b8897d083" containerName="registry-server" containerID="cri-o://1d77495a36747426d872d719c4b5c29ee0e6e958fc8df65ab5ab215611207fcb" gracePeriod=2 Feb 18 19:54:14 crc kubenswrapper[4942]: I0218 19:54:14.809627 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:14 crc kubenswrapper[4942]: I0218 19:54:14.887814 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-utilities\") pod \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\" (UID: \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\") " Feb 18 19:54:14 crc kubenswrapper[4942]: I0218 19:54:14.887882 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lck4\" (UniqueName: \"kubernetes.io/projected/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-kube-api-access-9lck4\") pod \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\" (UID: \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\") " Feb 18 19:54:14 crc kubenswrapper[4942]: I0218 19:54:14.888059 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-catalog-content\") pod \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\" (UID: \"5e924cf0-b2c6-4897-b2e7-4f9b8897d083\") " Feb 18 19:54:14 crc kubenswrapper[4942]: I0218 19:54:14.889475 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-utilities" (OuterVolumeSpecName: "utilities") pod "5e924cf0-b2c6-4897-b2e7-4f9b8897d083" (UID: 
"5e924cf0-b2c6-4897-b2e7-4f9b8897d083"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:14 crc kubenswrapper[4942]: I0218 19:54:14.896731 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-kube-api-access-9lck4" (OuterVolumeSpecName: "kube-api-access-9lck4") pod "5e924cf0-b2c6-4897-b2e7-4f9b8897d083" (UID: "5e924cf0-b2c6-4897-b2e7-4f9b8897d083"). InnerVolumeSpecName "kube-api-access-9lck4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:14 crc kubenswrapper[4942]: I0218 19:54:14.991017 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:14 crc kubenswrapper[4942]: I0218 19:54:14.991052 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lck4\" (UniqueName: \"kubernetes.io/projected/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-kube-api-access-9lck4\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:14 crc kubenswrapper[4942]: I0218 19:54:14.991942 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:14 crc kubenswrapper[4942]: I0218 19:54:14.992048 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.053777 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.377962 4942 generic.go:334] "Generic (PLEG): container finished" podID="5e924cf0-b2c6-4897-b2e7-4f9b8897d083" containerID="1d77495a36747426d872d719c4b5c29ee0e6e958fc8df65ab5ab215611207fcb" exitCode=0 Feb 18 19:54:15 crc kubenswrapper[4942]: 
I0218 19:54:15.378231 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hkw6" event={"ID":"5e924cf0-b2c6-4897-b2e7-4f9b8897d083","Type":"ContainerDied","Data":"1d77495a36747426d872d719c4b5c29ee0e6e958fc8df65ab5ab215611207fcb"} Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.378295 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4hkw6" event={"ID":"5e924cf0-b2c6-4897-b2e7-4f9b8897d083","Type":"ContainerDied","Data":"3827ab588f8db14da1f2e9a66731d1db7bc3e013ccdb2c77ca4f1d290292025b"} Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.378317 4942 scope.go:117] "RemoveContainer" containerID="1d77495a36747426d872d719c4b5c29ee0e6e958fc8df65ab5ab215611207fcb" Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.378422 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4hkw6" Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.420446 4942 scope.go:117] "RemoveContainer" containerID="ea194218bfe8d8fb6e0edb5dab22c760c8badc5d5af529d1765434569028a7a8" Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.426320 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.441865 4942 scope.go:117] "RemoveContainer" containerID="2d0712a79af0cd17e9d3752f83b8fbf806888be761852b2cb3edb3ac9aa0c67a" Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.496266 4942 scope.go:117] "RemoveContainer" containerID="1d77495a36747426d872d719c4b5c29ee0e6e958fc8df65ab5ab215611207fcb" Feb 18 19:54:15 crc kubenswrapper[4942]: E0218 19:54:15.498372 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d77495a36747426d872d719c4b5c29ee0e6e958fc8df65ab5ab215611207fcb\": container with ID starting with 
1d77495a36747426d872d719c4b5c29ee0e6e958fc8df65ab5ab215611207fcb not found: ID does not exist" containerID="1d77495a36747426d872d719c4b5c29ee0e6e958fc8df65ab5ab215611207fcb" Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.498418 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d77495a36747426d872d719c4b5c29ee0e6e958fc8df65ab5ab215611207fcb"} err="failed to get container status \"1d77495a36747426d872d719c4b5c29ee0e6e958fc8df65ab5ab215611207fcb\": rpc error: code = NotFound desc = could not find container \"1d77495a36747426d872d719c4b5c29ee0e6e958fc8df65ab5ab215611207fcb\": container with ID starting with 1d77495a36747426d872d719c4b5c29ee0e6e958fc8df65ab5ab215611207fcb not found: ID does not exist" Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.498445 4942 scope.go:117] "RemoveContainer" containerID="ea194218bfe8d8fb6e0edb5dab22c760c8badc5d5af529d1765434569028a7a8" Feb 18 19:54:15 crc kubenswrapper[4942]: E0218 19:54:15.498784 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea194218bfe8d8fb6e0edb5dab22c760c8badc5d5af529d1765434569028a7a8\": container with ID starting with ea194218bfe8d8fb6e0edb5dab22c760c8badc5d5af529d1765434569028a7a8 not found: ID does not exist" containerID="ea194218bfe8d8fb6e0edb5dab22c760c8badc5d5af529d1765434569028a7a8" Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.498803 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea194218bfe8d8fb6e0edb5dab22c760c8badc5d5af529d1765434569028a7a8"} err="failed to get container status \"ea194218bfe8d8fb6e0edb5dab22c760c8badc5d5af529d1765434569028a7a8\": rpc error: code = NotFound desc = could not find container \"ea194218bfe8d8fb6e0edb5dab22c760c8badc5d5af529d1765434569028a7a8\": container with ID starting with ea194218bfe8d8fb6e0edb5dab22c760c8badc5d5af529d1765434569028a7a8 not found: ID does not 
exist" Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.498817 4942 scope.go:117] "RemoveContainer" containerID="2d0712a79af0cd17e9d3752f83b8fbf806888be761852b2cb3edb3ac9aa0c67a" Feb 18 19:54:15 crc kubenswrapper[4942]: E0218 19:54:15.499152 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d0712a79af0cd17e9d3752f83b8fbf806888be761852b2cb3edb3ac9aa0c67a\": container with ID starting with 2d0712a79af0cd17e9d3752f83b8fbf806888be761852b2cb3edb3ac9aa0c67a not found: ID does not exist" containerID="2d0712a79af0cd17e9d3752f83b8fbf806888be761852b2cb3edb3ac9aa0c67a" Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.499172 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d0712a79af0cd17e9d3752f83b8fbf806888be761852b2cb3edb3ac9aa0c67a"} err="failed to get container status \"2d0712a79af0cd17e9d3752f83b8fbf806888be761852b2cb3edb3ac9aa0c67a\": rpc error: code = NotFound desc = could not find container \"2d0712a79af0cd17e9d3752f83b8fbf806888be761852b2cb3edb3ac9aa0c67a\": container with ID starting with 2d0712a79af0cd17e9d3752f83b8fbf806888be761852b2cb3edb3ac9aa0c67a not found: ID does not exist" Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.547505 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e924cf0-b2c6-4897-b2e7-4f9b8897d083" (UID: "5e924cf0-b2c6-4897-b2e7-4f9b8897d083"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.605993 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e924cf0-b2c6-4897-b2e7-4f9b8897d083-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.738075 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4hkw6"] Feb 18 19:54:15 crc kubenswrapper[4942]: I0218 19:54:15.748416 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4hkw6"] Feb 18 19:54:17 crc kubenswrapper[4942]: I0218 19:54:17.054708 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e924cf0-b2c6-4897-b2e7-4f9b8897d083" path="/var/lib/kubelet/pods/5e924cf0-b2c6-4897-b2e7-4f9b8897d083/volumes" Feb 18 19:54:17 crc kubenswrapper[4942]: I0218 19:54:17.461949 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lszsq"] Feb 18 19:54:18 crc kubenswrapper[4942]: I0218 19:54:18.410711 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lszsq" podUID="383667c1-9137-4f4f-a870-3bbc3dee3050" containerName="registry-server" containerID="cri-o://e6e36f3a740b91dbd03b01c5e3d04984228711747c2ab244bd4357d34fe38eec" gracePeriod=2 Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.422687 4942 generic.go:334] "Generic (PLEG): container finished" podID="383667c1-9137-4f4f-a870-3bbc3dee3050" containerID="e6e36f3a740b91dbd03b01c5e3d04984228711747c2ab244bd4357d34fe38eec" exitCode=0 Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.422768 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lszsq" 
event={"ID":"383667c1-9137-4f4f-a870-3bbc3dee3050","Type":"ContainerDied","Data":"e6e36f3a740b91dbd03b01c5e3d04984228711747c2ab244bd4357d34fe38eec"} Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.423152 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lszsq" event={"ID":"383667c1-9137-4f4f-a870-3bbc3dee3050","Type":"ContainerDied","Data":"0ac470feecbc5858e591f7152f03a704a24ee4c2d533e4c44891108eea6256a3"} Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.423175 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ac470feecbc5858e591f7152f03a704a24ee4c2d533e4c44891108eea6256a3" Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.459463 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.486728 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383667c1-9137-4f4f-a870-3bbc3dee3050-utilities\") pod \"383667c1-9137-4f4f-a870-3bbc3dee3050\" (UID: \"383667c1-9137-4f4f-a870-3bbc3dee3050\") " Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.486839 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h69hf\" (UniqueName: \"kubernetes.io/projected/383667c1-9137-4f4f-a870-3bbc3dee3050-kube-api-access-h69hf\") pod \"383667c1-9137-4f4f-a870-3bbc3dee3050\" (UID: \"383667c1-9137-4f4f-a870-3bbc3dee3050\") " Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.487031 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383667c1-9137-4f4f-a870-3bbc3dee3050-catalog-content\") pod \"383667c1-9137-4f4f-a870-3bbc3dee3050\" (UID: \"383667c1-9137-4f4f-a870-3bbc3dee3050\") " Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 
19:54:19.487584 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/383667c1-9137-4f4f-a870-3bbc3dee3050-utilities" (OuterVolumeSpecName: "utilities") pod "383667c1-9137-4f4f-a870-3bbc3dee3050" (UID: "383667c1-9137-4f4f-a870-3bbc3dee3050"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.492061 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/383667c1-9137-4f4f-a870-3bbc3dee3050-kube-api-access-h69hf" (OuterVolumeSpecName: "kube-api-access-h69hf") pod "383667c1-9137-4f4f-a870-3bbc3dee3050" (UID: "383667c1-9137-4f4f-a870-3bbc3dee3050"). InnerVolumeSpecName "kube-api-access-h69hf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.513089 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/383667c1-9137-4f4f-a870-3bbc3dee3050-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "383667c1-9137-4f4f-a870-3bbc3dee3050" (UID: "383667c1-9137-4f4f-a870-3bbc3dee3050"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.589411 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/383667c1-9137-4f4f-a870-3bbc3dee3050-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.589439 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h69hf\" (UniqueName: \"kubernetes.io/projected/383667c1-9137-4f4f-a870-3bbc3dee3050-kube-api-access-h69hf\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:19 crc kubenswrapper[4942]: I0218 19:54:19.589449 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/383667c1-9137-4f4f-a870-3bbc3dee3050-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 19:54:20 crc kubenswrapper[4942]: I0218 19:54:20.434283 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lszsq" Feb 18 19:54:20 crc kubenswrapper[4942]: I0218 19:54:20.484051 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lszsq"] Feb 18 19:54:20 crc kubenswrapper[4942]: I0218 19:54:20.494427 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lszsq"] Feb 18 19:54:21 crc kubenswrapper[4942]: I0218 19:54:21.074697 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="383667c1-9137-4f4f-a870-3bbc3dee3050" path="/var/lib/kubelet/pods/383667c1-9137-4f4f-a870-3bbc3dee3050/volumes" Feb 18 19:55:05 crc kubenswrapper[4942]: I0218 19:55:05.874449 4942 generic.go:334] "Generic (PLEG): container finished" podID="1924338e-aea6-474f-9216-bb7eb32dc5fe" containerID="0881a0c3a7d6de31317f11d4bbacc01b597b4d2f4939061d09363608ec65d1f7" exitCode=0 Feb 18 19:55:05 crc kubenswrapper[4942]: I0218 19:55:05.874568 4942 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" event={"ID":"1924338e-aea6-474f-9216-bb7eb32dc5fe","Type":"ContainerDied","Data":"0881a0c3a7d6de31317f11d4bbacc01b597b4d2f4939061d09363608ec65d1f7"} Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.317097 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.377812 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-inventory\") pod \"1924338e-aea6-474f-9216-bb7eb32dc5fe\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.377914 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-libvirt-combined-ca-bundle\") pod \"1924338e-aea6-474f-9216-bb7eb32dc5fe\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.377945 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llwjc\" (UniqueName: \"kubernetes.io/projected/1924338e-aea6-474f-9216-bb7eb32dc5fe-kube-api-access-llwjc\") pod \"1924338e-aea6-474f-9216-bb7eb32dc5fe\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.377991 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-libvirt-secret-0\") pod \"1924338e-aea6-474f-9216-bb7eb32dc5fe\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.378080 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-ssh-key-openstack-edpm-ipam\") pod \"1924338e-aea6-474f-9216-bb7eb32dc5fe\" (UID: \"1924338e-aea6-474f-9216-bb7eb32dc5fe\") " Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.384081 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "1924338e-aea6-474f-9216-bb7eb32dc5fe" (UID: "1924338e-aea6-474f-9216-bb7eb32dc5fe"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.384564 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1924338e-aea6-474f-9216-bb7eb32dc5fe-kube-api-access-llwjc" (OuterVolumeSpecName: "kube-api-access-llwjc") pod "1924338e-aea6-474f-9216-bb7eb32dc5fe" (UID: "1924338e-aea6-474f-9216-bb7eb32dc5fe"). InnerVolumeSpecName "kube-api-access-llwjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.405390 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "1924338e-aea6-474f-9216-bb7eb32dc5fe" (UID: "1924338e-aea6-474f-9216-bb7eb32dc5fe"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.407819 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-inventory" (OuterVolumeSpecName: "inventory") pod "1924338e-aea6-474f-9216-bb7eb32dc5fe" (UID: "1924338e-aea6-474f-9216-bb7eb32dc5fe"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.415667 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1924338e-aea6-474f-9216-bb7eb32dc5fe" (UID: "1924338e-aea6-474f-9216-bb7eb32dc5fe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.480607 4942 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.480660 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.480684 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.480704 4942 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1924338e-aea6-474f-9216-bb7eb32dc5fe-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.480723 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llwjc\" (UniqueName: \"kubernetes.io/projected/1924338e-aea6-474f-9216-bb7eb32dc5fe-kube-api-access-llwjc\") on node \"crc\" DevicePath \"\"" Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 
19:55:07.894107 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" event={"ID":"1924338e-aea6-474f-9216-bb7eb32dc5fe","Type":"ContainerDied","Data":"85a49794f2563fabd1097b6b3517a1243e11855d7bbbaa0e4c993f19ad38505f"} Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.894154 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nmvq" Feb 18 19:55:07 crc kubenswrapper[4942]: I0218 19:55:07.894155 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85a49794f2563fabd1097b6b3517a1243e11855d7bbbaa0e4c993f19ad38505f" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.019249 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"] Feb 18 19:55:08 crc kubenswrapper[4942]: E0218 19:55:08.020409 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e924cf0-b2c6-4897-b2e7-4f9b8897d083" containerName="extract-utilities" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.020426 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e924cf0-b2c6-4897-b2e7-4f9b8897d083" containerName="extract-utilities" Feb 18 19:55:08 crc kubenswrapper[4942]: E0218 19:55:08.020453 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383667c1-9137-4f4f-a870-3bbc3dee3050" containerName="extract-utilities" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.020459 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="383667c1-9137-4f4f-a870-3bbc3dee3050" containerName="extract-utilities" Feb 18 19:55:08 crc kubenswrapper[4942]: E0218 19:55:08.020484 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e924cf0-b2c6-4897-b2e7-4f9b8897d083" containerName="registry-server" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.020494 4942 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="5e924cf0-b2c6-4897-b2e7-4f9b8897d083" containerName="registry-server" Feb 18 19:55:08 crc kubenswrapper[4942]: E0218 19:55:08.020504 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383667c1-9137-4f4f-a870-3bbc3dee3050" containerName="registry-server" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.020513 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="383667c1-9137-4f4f-a870-3bbc3dee3050" containerName="registry-server" Feb 18 19:55:08 crc kubenswrapper[4942]: E0218 19:55:08.020539 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e924cf0-b2c6-4897-b2e7-4f9b8897d083" containerName="extract-content" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.020545 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e924cf0-b2c6-4897-b2e7-4f9b8897d083" containerName="extract-content" Feb 18 19:55:08 crc kubenswrapper[4942]: E0218 19:55:08.020551 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="383667c1-9137-4f4f-a870-3bbc3dee3050" containerName="extract-content" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.020558 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="383667c1-9137-4f4f-a870-3bbc3dee3050" containerName="extract-content" Feb 18 19:55:08 crc kubenswrapper[4942]: E0218 19:55:08.020570 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1924338e-aea6-474f-9216-bb7eb32dc5fe" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.020580 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="1924338e-aea6-474f-9216-bb7eb32dc5fe" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.020982 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="383667c1-9137-4f4f-a870-3bbc3dee3050" containerName="registry-server" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 
19:55:08.021008 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e924cf0-b2c6-4897-b2e7-4f9b8897d083" containerName="registry-server" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.021029 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="1924338e-aea6-474f-9216-bb7eb32dc5fe" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.022086 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.026241 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.026440 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.028105 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.028323 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.028435 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.028561 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.028696 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.051033 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"] Feb 18 
19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.092739 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.092799 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.092927 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.092967 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.093025 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.093109 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.093133 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5vhg\" (UniqueName: \"kubernetes.io/projected/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-kube-api-access-s5vhg\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.093285 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.093394 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-cell1-compute-config-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.195439 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.195539 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.195574 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.195600 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.195623 
4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.195663 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.195706 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.195731 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5vhg\" (UniqueName: \"kubernetes.io/projected/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-kube-api-access-s5vhg\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.195835 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-inventory\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.197151 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.200449 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.201355 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.201583 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.210555 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.210657 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.210805 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.215847 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5vhg\" (UniqueName: \"kubernetes.io/projected/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-kube-api-access-s5vhg\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.219380 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-hckpr\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.340561 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:55:08 crc kubenswrapper[4942]: I0218 19:55:08.949871 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr"] Feb 18 19:55:08 crc kubenswrapper[4942]: W0218 19:55:08.955182 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2f25881_283c_4b0e_9f7f_e7e8ae0dfc70.slice/crio-26b2e0a6c7d2cd46d70e4fc3ba7e0f057acbb4c702bcc59e97a7a00a1e2041ed WatchSource:0}: Error finding container 26b2e0a6c7d2cd46d70e4fc3ba7e0f057acbb4c702bcc59e97a7a00a1e2041ed: Status 404 returned error can't find the container with id 26b2e0a6c7d2cd46d70e4fc3ba7e0f057acbb4c702bcc59e97a7a00a1e2041ed Feb 18 19:55:09 crc kubenswrapper[4942]: I0218 19:55:09.914193 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" event={"ID":"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70","Type":"ContainerStarted","Data":"64878fa884de6ab75395084ab5066c4598e313f06a7c48d59600498c9717bbc7"} Feb 18 19:55:09 crc kubenswrapper[4942]: I0218 19:55:09.914753 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" event={"ID":"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70","Type":"ContainerStarted","Data":"26b2e0a6c7d2cd46d70e4fc3ba7e0f057acbb4c702bcc59e97a7a00a1e2041ed"} Feb 18 19:55:09 crc kubenswrapper[4942]: I0218 19:55:09.945825 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" podStartSLOduration=2.491454004 podStartE2EDuration="2.945806906s" podCreationTimestamp="2026-02-18 19:55:07 +0000 UTC" 
firstStartedPulling="2026-02-18 19:55:08.958347982 +0000 UTC m=+2268.663280647" lastFinishedPulling="2026-02-18 19:55:09.412700884 +0000 UTC m=+2269.117633549" observedRunningTime="2026-02-18 19:55:09.937926987 +0000 UTC m=+2269.642859692" watchObservedRunningTime="2026-02-18 19:55:09.945806906 +0000 UTC m=+2269.650739561" Feb 18 19:56:23 crc kubenswrapper[4942]: I0218 19:56:23.740718 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:56:23 crc kubenswrapper[4942]: I0218 19:56:23.741371 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:56:53 crc kubenswrapper[4942]: I0218 19:56:53.740961 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:56:53 crc kubenswrapper[4942]: I0218 19:56:53.741537 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:57:23 crc kubenswrapper[4942]: I0218 19:57:23.740878 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 19:57:23 crc kubenswrapper[4942]: I0218 19:57:23.741458 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 19:57:23 crc kubenswrapper[4942]: I0218 19:57:23.741509 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 19:57:23 crc kubenswrapper[4942]: I0218 19:57:23.742248 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 19:57:23 crc kubenswrapper[4942]: I0218 19:57:23.742307 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" gracePeriod=600 Feb 18 19:57:23 crc kubenswrapper[4942]: E0218 19:57:23.880273 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:57:24 crc kubenswrapper[4942]: I0218 19:57:24.271179 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" exitCode=0 Feb 18 19:57:24 crc kubenswrapper[4942]: I0218 19:57:24.271236 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d"} Feb 18 19:57:24 crc kubenswrapper[4942]: I0218 19:57:24.271506 4942 scope.go:117] "RemoveContainer" containerID="c81e1d649813c8beecb89429c1c4dde799b86b0af5d8804642a6a83d2ee52071" Feb 18 19:57:24 crc kubenswrapper[4942]: I0218 19:57:24.272757 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 19:57:24 crc kubenswrapper[4942]: E0218 19:57:24.273131 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:57:33 crc kubenswrapper[4942]: I0218 19:57:33.357367 4942 generic.go:334] "Generic (PLEG): container finished" podID="d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70" containerID="64878fa884de6ab75395084ab5066c4598e313f06a7c48d59600498c9717bbc7" exitCode=0 Feb 18 19:57:33 crc kubenswrapper[4942]: I0218 19:57:33.357440 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" 
event={"ID":"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70","Type":"ContainerDied","Data":"64878fa884de6ab75395084ab5066c4598e313f06a7c48d59600498c9717bbc7"} Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.835991 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.971934 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-inventory\") pod \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.972304 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-combined-ca-bundle\") pod \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.972943 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-cell1-compute-config-0\") pod \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.973007 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5vhg\" (UniqueName: \"kubernetes.io/projected/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-kube-api-access-s5vhg\") pod \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.973098 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-ssh-key-openstack-edpm-ipam\") pod \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.973158 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-migration-ssh-key-1\") pod \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.973230 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-extra-config-0\") pod \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.973264 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-cell1-compute-config-1\") pod \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.973306 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-migration-ssh-key-0\") pod \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\" (UID: \"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70\") " Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.985154 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70" 
(UID: "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.995013 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-kube-api-access-s5vhg" (OuterVolumeSpecName: "kube-api-access-s5vhg") pod "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70" (UID: "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70"). InnerVolumeSpecName "kube-api-access-s5vhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.998511 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70" (UID: "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 19:57:34 crc kubenswrapper[4942]: I0218 19:57:34.999544 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-inventory" (OuterVolumeSpecName: "inventory") pod "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70" (UID: "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.000568 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70" (UID: "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.001824 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70" (UID: "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.006478 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70" (UID: "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.015156 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70" (UID: "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.022054 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70" (UID: "d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.076362 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5vhg\" (UniqueName: \"kubernetes.io/projected/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-kube-api-access-s5vhg\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.076561 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.076670 4942 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.076824 4942 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.076926 4942 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.077003 4942 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.077104 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-inventory\") on node 
\"crc\" DevicePath \"\"" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.077192 4942 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.077267 4942 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.381722 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" event={"ID":"d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70","Type":"ContainerDied","Data":"26b2e0a6c7d2cd46d70e4fc3ba7e0f057acbb4c702bcc59e97a7a00a1e2041ed"} Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.381773 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26b2e0a6c7d2cd46d70e4fc3ba7e0f057acbb4c702bcc59e97a7a00a1e2041ed" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.382103 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-hckpr" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.517778 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5"] Feb 18 19:57:35 crc kubenswrapper[4942]: E0218 19:57:35.518329 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.518349 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.518586 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2f25881-283c-4b0e-9f7f-e7e8ae0dfc70" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.523728 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.527657 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5"] Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.528801 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.529072 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.529292 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.528996 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.529493 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-rgcbh" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.688931 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.689080 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-2\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.689161 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xpn6\" (UniqueName: \"kubernetes.io/projected/5ea9c52a-c8f0-4189-a995-202a5a8a07db-kube-api-access-6xpn6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.689211 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.689244 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.689309 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.689368 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.791515 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.791610 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.791641 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 
19:57:35.791747 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.791807 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xpn6\" (UniqueName: \"kubernetes.io/projected/5ea9c52a-c8f0-4189-a995-202a5a8a07db-kube-api-access-6xpn6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.791852 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.791885 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.795298 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.796421 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.796495 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.796533 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.799525 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.806544 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.810118 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xpn6\" (UniqueName: \"kubernetes.io/projected/5ea9c52a-c8f0-4189-a995-202a5a8a07db-kube-api-access-6xpn6\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-drng5\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:35 crc kubenswrapper[4942]: I0218 19:57:35.862559 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:57:36 crc kubenswrapper[4942]: I0218 19:57:36.384391 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5"] Feb 18 19:57:37 crc kubenswrapper[4942]: I0218 19:57:37.036358 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 19:57:37 crc kubenswrapper[4942]: E0218 19:57:37.036866 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:57:37 crc kubenswrapper[4942]: I0218 19:57:37.400170 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" event={"ID":"5ea9c52a-c8f0-4189-a995-202a5a8a07db","Type":"ContainerStarted","Data":"6617e1c641d88ba7eecc0e139fcf0fe9a178e976a0890aa0716fd002e93b4732"} Feb 18 19:57:37 crc kubenswrapper[4942]: I0218 19:57:37.400241 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" event={"ID":"5ea9c52a-c8f0-4189-a995-202a5a8a07db","Type":"ContainerStarted","Data":"0c96156ca263a32dd6a9652c4b243415d18ac4f84af1e982cee89d29282773ff"} Feb 18 19:57:37 crc kubenswrapper[4942]: I0218 19:57:37.423185 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" podStartSLOduration=1.9631740720000002 podStartE2EDuration="2.423166323s" podCreationTimestamp="2026-02-18 19:57:35 +0000 UTC" firstStartedPulling="2026-02-18 
19:57:36.397696681 +0000 UTC m=+2416.102629346" lastFinishedPulling="2026-02-18 19:57:36.857688922 +0000 UTC m=+2416.562621597" observedRunningTime="2026-02-18 19:57:37.420168394 +0000 UTC m=+2417.125101059" watchObservedRunningTime="2026-02-18 19:57:37.423166323 +0000 UTC m=+2417.128098988" Feb 18 19:57:48 crc kubenswrapper[4942]: I0218 19:57:48.035735 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 19:57:48 crc kubenswrapper[4942]: E0218 19:57:48.036742 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:58:02 crc kubenswrapper[4942]: I0218 19:58:02.036285 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 19:58:02 crc kubenswrapper[4942]: E0218 19:58:02.037311 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:58:16 crc kubenswrapper[4942]: I0218 19:58:16.035721 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 19:58:16 crc kubenswrapper[4942]: E0218 19:58:16.036488 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:58:28 crc kubenswrapper[4942]: I0218 19:58:28.036261 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 19:58:28 crc kubenswrapper[4942]: E0218 19:58:28.037435 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:58:42 crc kubenswrapper[4942]: I0218 19:58:42.037148 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 19:58:42 crc kubenswrapper[4942]: E0218 19:58:42.038738 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:58:54 crc kubenswrapper[4942]: I0218 19:58:54.036111 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 19:58:54 crc kubenswrapper[4942]: E0218 19:58:54.037330 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:59:06 crc kubenswrapper[4942]: I0218 19:59:06.039699 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 19:59:06 crc kubenswrapper[4942]: E0218 19:59:06.040670 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:59:19 crc kubenswrapper[4942]: I0218 19:59:19.036863 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 19:59:19 crc kubenswrapper[4942]: E0218 19:59:19.037680 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:59:34 crc kubenswrapper[4942]: I0218 19:59:34.036530 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 19:59:34 crc kubenswrapper[4942]: E0218 19:59:34.037821 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:59:38 crc kubenswrapper[4942]: I0218 19:59:38.730362 4942 generic.go:334] "Generic (PLEG): container finished" podID="5ea9c52a-c8f0-4189-a995-202a5a8a07db" containerID="6617e1c641d88ba7eecc0e139fcf0fe9a178e976a0890aa0716fd002e93b4732" exitCode=0 Feb 18 19:59:38 crc kubenswrapper[4942]: I0218 19:59:38.730424 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" event={"ID":"5ea9c52a-c8f0-4189-a995-202a5a8a07db","Type":"ContainerDied","Data":"6617e1c641d88ba7eecc0e139fcf0fe9a178e976a0890aa0716fd002e93b4732"} Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.184917 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.301471 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ssh-key-openstack-edpm-ipam\") pod \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.301532 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-2\") pod \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.301558 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-1\") pod \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.301660 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-0\") pod \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.301829 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-inventory\") pod \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " Feb 18 19:59:40 crc 
kubenswrapper[4942]: I0218 19:59:40.301901 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xpn6\" (UniqueName: \"kubernetes.io/projected/5ea9c52a-c8f0-4189-a995-202a5a8a07db-kube-api-access-6xpn6\") pod \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.301964 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-telemetry-combined-ca-bundle\") pod \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\" (UID: \"5ea9c52a-c8f0-4189-a995-202a5a8a07db\") " Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.308315 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "5ea9c52a-c8f0-4189-a995-202a5a8a07db" (UID: "5ea9c52a-c8f0-4189-a995-202a5a8a07db"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.308965 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ea9c52a-c8f0-4189-a995-202a5a8a07db-kube-api-access-6xpn6" (OuterVolumeSpecName: "kube-api-access-6xpn6") pod "5ea9c52a-c8f0-4189-a995-202a5a8a07db" (UID: "5ea9c52a-c8f0-4189-a995-202a5a8a07db"). InnerVolumeSpecName "kube-api-access-6xpn6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.335850 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "5ea9c52a-c8f0-4189-a995-202a5a8a07db" (UID: "5ea9c52a-c8f0-4189-a995-202a5a8a07db"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.336558 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "5ea9c52a-c8f0-4189-a995-202a5a8a07db" (UID: "5ea9c52a-c8f0-4189-a995-202a5a8a07db"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.342141 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-inventory" (OuterVolumeSpecName: "inventory") pod "5ea9c52a-c8f0-4189-a995-202a5a8a07db" (UID: "5ea9c52a-c8f0-4189-a995-202a5a8a07db"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.345042 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "5ea9c52a-c8f0-4189-a995-202a5a8a07db" (UID: "5ea9c52a-c8f0-4189-a995-202a5a8a07db"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.360208 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5ea9c52a-c8f0-4189-a995-202a5a8a07db" (UID: "5ea9c52a-c8f0-4189-a995-202a5a8a07db"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.404531 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xpn6\" (UniqueName: \"kubernetes.io/projected/5ea9c52a-c8f0-4189-a995-202a5a8a07db-kube-api-access-6xpn6\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.404561 4942 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.404572 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.404580 4942 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.404590 4942 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 
18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.404599 4942 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.404608 4942 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ea9c52a-c8f0-4189-a995-202a5a8a07db-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.764578 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" event={"ID":"5ea9c52a-c8f0-4189-a995-202a5a8a07db","Type":"ContainerDied","Data":"0c96156ca263a32dd6a9652c4b243415d18ac4f84af1e982cee89d29282773ff"} Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.764640 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c96156ca263a32dd6a9652c4b243415d18ac4f84af1e982cee89d29282773ff" Feb 18 19:59:40 crc kubenswrapper[4942]: I0218 19:59:40.764694 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-drng5" Feb 18 19:59:45 crc kubenswrapper[4942]: I0218 19:59:45.037370 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 19:59:45 crc kubenswrapper[4942]: E0218 19:59:45.038471 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 19:59:57 crc kubenswrapper[4942]: I0218 19:59:57.037418 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 19:59:57 crc kubenswrapper[4942]: E0218 19:59:57.038604 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.175173 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp"] Feb 18 20:00:00 crc kubenswrapper[4942]: E0218 20:00:00.176311 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea9c52a-c8f0-4189-a995-202a5a8a07db" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.176345 4942 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5ea9c52a-c8f0-4189-a995-202a5a8a07db" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.177141 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ea9c52a-c8f0-4189-a995-202a5a8a07db" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.178282 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.182413 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.182671 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.196023 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp"] Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.262106 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-config-volume\") pod \"collect-profiles-29524080-55fqp\" (UID: \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.262197 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-secret-volume\") pod \"collect-profiles-29524080-55fqp\" (UID: \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.262231 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-676gt\" (UniqueName: \"kubernetes.io/projected/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-kube-api-access-676gt\") pod \"collect-profiles-29524080-55fqp\" (UID: \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.363872 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-config-volume\") pod \"collect-profiles-29524080-55fqp\" (UID: \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.363957 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-secret-volume\") pod \"collect-profiles-29524080-55fqp\" (UID: \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.363995 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-676gt\" (UniqueName: \"kubernetes.io/projected/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-kube-api-access-676gt\") pod \"collect-profiles-29524080-55fqp\" (UID: \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.364947 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-config-volume\") pod \"collect-profiles-29524080-55fqp\" (UID: \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.370424 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-secret-volume\") pod \"collect-profiles-29524080-55fqp\" (UID: \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.382020 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-676gt\" (UniqueName: \"kubernetes.io/projected/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-kube-api-access-676gt\") pod \"collect-profiles-29524080-55fqp\" (UID: \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" Feb 18 20:00:00 crc kubenswrapper[4942]: I0218 20:00:00.511895 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" Feb 18 20:00:01 crc kubenswrapper[4942]: I0218 20:00:01.079788 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp"] Feb 18 20:00:02 crc kubenswrapper[4942]: I0218 20:00:02.012296 4942 generic.go:334] "Generic (PLEG): container finished" podID="84ccdc2e-1528-43d4-9c24-42f72bfbb0de" containerID="61bb3a2b09293111d8de2349b0416e5a02bfa7aaf7424af19bf5902a23d6049e" exitCode=0 Feb 18 20:00:02 crc kubenswrapper[4942]: I0218 20:00:02.012473 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" event={"ID":"84ccdc2e-1528-43d4-9c24-42f72bfbb0de","Type":"ContainerDied","Data":"61bb3a2b09293111d8de2349b0416e5a02bfa7aaf7424af19bf5902a23d6049e"} Feb 18 20:00:02 crc kubenswrapper[4942]: I0218 20:00:02.012698 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" event={"ID":"84ccdc2e-1528-43d4-9c24-42f72bfbb0de","Type":"ContainerStarted","Data":"3f160f06bcdb4ed80d5a39638f45d7b063d7ba8757457db74483d1ab8f5566cc"} Feb 18 20:00:03 crc kubenswrapper[4942]: I0218 20:00:03.322951 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" Feb 18 20:00:03 crc kubenswrapper[4942]: I0218 20:00:03.440521 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-676gt\" (UniqueName: \"kubernetes.io/projected/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-kube-api-access-676gt\") pod \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\" (UID: \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\") " Feb 18 20:00:03 crc kubenswrapper[4942]: I0218 20:00:03.440566 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-config-volume\") pod \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\" (UID: \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\") " Feb 18 20:00:03 crc kubenswrapper[4942]: I0218 20:00:03.440593 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-secret-volume\") pod \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\" (UID: \"84ccdc2e-1528-43d4-9c24-42f72bfbb0de\") " Feb 18 20:00:03 crc kubenswrapper[4942]: I0218 20:00:03.441925 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-config-volume" (OuterVolumeSpecName: "config-volume") pod "84ccdc2e-1528-43d4-9c24-42f72bfbb0de" (UID: "84ccdc2e-1528-43d4-9c24-42f72bfbb0de"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:00:03 crc kubenswrapper[4942]: I0218 20:00:03.446089 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-kube-api-access-676gt" (OuterVolumeSpecName: "kube-api-access-676gt") pod "84ccdc2e-1528-43d4-9c24-42f72bfbb0de" (UID: "84ccdc2e-1528-43d4-9c24-42f72bfbb0de"). 
InnerVolumeSpecName "kube-api-access-676gt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:00:03 crc kubenswrapper[4942]: I0218 20:00:03.448039 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "84ccdc2e-1528-43d4-9c24-42f72bfbb0de" (UID: "84ccdc2e-1528-43d4-9c24-42f72bfbb0de"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:00:03 crc kubenswrapper[4942]: I0218 20:00:03.542868 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-676gt\" (UniqueName: \"kubernetes.io/projected/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-kube-api-access-676gt\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:03 crc kubenswrapper[4942]: I0218 20:00:03.542920 4942 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:03 crc kubenswrapper[4942]: I0218 20:00:03.542931 4942 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84ccdc2e-1528-43d4-9c24-42f72bfbb0de-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:04 crc kubenswrapper[4942]: I0218 20:00:04.036725 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" event={"ID":"84ccdc2e-1528-43d4-9c24-42f72bfbb0de","Type":"ContainerDied","Data":"3f160f06bcdb4ed80d5a39638f45d7b063d7ba8757457db74483d1ab8f5566cc"} Feb 18 20:00:04 crc kubenswrapper[4942]: I0218 20:00:04.036819 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f160f06bcdb4ed80d5a39638f45d7b063d7ba8757457db74483d1ab8f5566cc" Feb 18 20:00:04 crc kubenswrapper[4942]: I0218 20:00:04.036888 4942 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524080-55fqp" Feb 18 20:00:04 crc kubenswrapper[4942]: I0218 20:00:04.421356 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4"] Feb 18 20:00:04 crc kubenswrapper[4942]: I0218 20:00:04.433099 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524035-tk5g4"] Feb 18 20:00:05 crc kubenswrapper[4942]: I0218 20:00:05.058923 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ba4570-01bb-4964-8c1d-791c25d72a1a" path="/var/lib/kubelet/pods/01ba4570-01bb-4964-8c1d-791c25d72a1a/volumes" Feb 18 20:00:08 crc kubenswrapper[4942]: I0218 20:00:08.036757 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 20:00:08 crc kubenswrapper[4942]: E0218 20:00:08.037274 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:00:19 crc kubenswrapper[4942]: I0218 20:00:19.036867 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 20:00:19 crc kubenswrapper[4942]: E0218 20:00:19.038167 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:00:25 crc kubenswrapper[4942]: I0218 20:00:25.057697 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:00:25 crc kubenswrapper[4942]: I0218 20:00:25.058526 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="prometheus" containerID="cri-o://3c0ad897361779d547581def3c2fec1b2d3e96f7b286fe553ae81f2d2d440845" gracePeriod=600 Feb 18 20:00:25 crc kubenswrapper[4942]: I0218 20:00:25.058610 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="thanos-sidecar" containerID="cri-o://d83f11bf1c6741c63e8403adeaf2debe729c7d20905670045ec96ee9fceb1c98" gracePeriod=600 Feb 18 20:00:25 crc kubenswrapper[4942]: I0218 20:00:25.058671 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="config-reloader" containerID="cri-o://188d1b7181a7567a8d1558b4f9342a2d5d02b2fb0b9db6d0ed29fc015cdd4109" gracePeriod=600 Feb 18 20:00:25 crc kubenswrapper[4942]: I0218 20:00:25.311892 4942 generic.go:334] "Generic (PLEG): container finished" podID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerID="d83f11bf1c6741c63e8403adeaf2debe729c7d20905670045ec96ee9fceb1c98" exitCode=0 Feb 18 20:00:25 crc kubenswrapper[4942]: I0218 20:00:25.312211 4942 generic.go:334] "Generic (PLEG): container finished" podID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerID="3c0ad897361779d547581def3c2fec1b2d3e96f7b286fe553ae81f2d2d440845" exitCode=0 Feb 18 20:00:25 crc kubenswrapper[4942]: I0218 20:00:25.312240 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"219b2aa4-0497-40f8-a3d0-947d37be720d","Type":"ContainerDied","Data":"d83f11bf1c6741c63e8403adeaf2debe729c7d20905670045ec96ee9fceb1c98"} Feb 18 20:00:25 crc kubenswrapper[4942]: I0218 20:00:25.312273 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"219b2aa4-0497-40f8-a3d0-947d37be720d","Type":"ContainerDied","Data":"3c0ad897361779d547581def3c2fec1b2d3e96f7b286fe553ae81f2d2d440845"} Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.187836 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.313847 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-1\") pod \"219b2aa4-0497-40f8-a3d0-947d37be720d\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.313932 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-0\") pod \"219b2aa4-0497-40f8-a3d0-947d37be720d\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.313962 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-config\") pod \"219b2aa4-0497-40f8-a3d0-947d37be720d\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.314645 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"219b2aa4-0497-40f8-a3d0-947d37be720d\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.314746 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-thanos-prometheus-http-client-file\") pod \"219b2aa4-0497-40f8-a3d0-947d37be720d\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.314814 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/219b2aa4-0497-40f8-a3d0-947d37be720d-tls-assets\") pod \"219b2aa4-0497-40f8-a3d0-947d37be720d\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.314876 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s685z\" (UniqueName: \"kubernetes.io/projected/219b2aa4-0497-40f8-a3d0-947d37be720d-kube-api-access-s685z\") pod \"219b2aa4-0497-40f8-a3d0-947d37be720d\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.314927 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"219b2aa4-0497-40f8-a3d0-947d37be720d\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.314962 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-secret-combined-ca-bundle\") pod \"219b2aa4-0497-40f8-a3d0-947d37be720d\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.315010 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-2\") pod \"219b2aa4-0497-40f8-a3d0-947d37be720d\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.315087 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config\") pod \"219b2aa4-0497-40f8-a3d0-947d37be720d\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.315120 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/219b2aa4-0497-40f8-a3d0-947d37be720d-config-out\") pod \"219b2aa4-0497-40f8-a3d0-947d37be720d\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.315153 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"219b2aa4-0497-40f8-a3d0-947d37be720d\" (UID: \"219b2aa4-0497-40f8-a3d0-947d37be720d\") " Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.314868 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: 
"prometheus-metric-storage-rulefiles-1") pod "219b2aa4-0497-40f8-a3d0-947d37be720d" (UID: "219b2aa4-0497-40f8-a3d0-947d37be720d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.315517 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "219b2aa4-0497-40f8-a3d0-947d37be720d" (UID: "219b2aa4-0497-40f8-a3d0-947d37be720d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.315641 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "219b2aa4-0497-40f8-a3d0-947d37be720d" (UID: "219b2aa4-0497-40f8-a3d0-947d37be720d"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.316281 4942 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.316307 4942 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.316318 4942 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/219b2aa4-0497-40f8-a3d0-947d37be720d-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.321542 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/219b2aa4-0497-40f8-a3d0-947d37be720d-config-out" (OuterVolumeSpecName: "config-out") pod "219b2aa4-0497-40f8-a3d0-947d37be720d" (UID: "219b2aa4-0497-40f8-a3d0-947d37be720d"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.323404 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "219b2aa4-0497-40f8-a3d0-947d37be720d" (UID: "219b2aa4-0497-40f8-a3d0-947d37be720d"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.323491 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/219b2aa4-0497-40f8-a3d0-947d37be720d-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "219b2aa4-0497-40f8-a3d0-947d37be720d" (UID: "219b2aa4-0497-40f8-a3d0-947d37be720d"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.323790 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/219b2aa4-0497-40f8-a3d0-947d37be720d-kube-api-access-s685z" (OuterVolumeSpecName: "kube-api-access-s685z") pod "219b2aa4-0497-40f8-a3d0-947d37be720d" (UID: "219b2aa4-0497-40f8-a3d0-947d37be720d"). InnerVolumeSpecName "kube-api-access-s685z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.323934 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "219b2aa4-0497-40f8-a3d0-947d37be720d" (UID: "219b2aa4-0497-40f8-a3d0-947d37be720d"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.324026 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "219b2aa4-0497-40f8-a3d0-947d37be720d" (UID: "219b2aa4-0497-40f8-a3d0-947d37be720d"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.324105 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-config" (OuterVolumeSpecName: "config") pod "219b2aa4-0497-40f8-a3d0-947d37be720d" (UID: "219b2aa4-0497-40f8-a3d0-947d37be720d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.328340 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "219b2aa4-0497-40f8-a3d0-947d37be720d" (UID: "219b2aa4-0497-40f8-a3d0-947d37be720d"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.336582 4942 generic.go:334] "Generic (PLEG): container finished" podID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerID="188d1b7181a7567a8d1558b4f9342a2d5d02b2fb0b9db6d0ed29fc015cdd4109" exitCode=0 Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.336633 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"219b2aa4-0497-40f8-a3d0-947d37be720d","Type":"ContainerDied","Data":"188d1b7181a7567a8d1558b4f9342a2d5d02b2fb0b9db6d0ed29fc015cdd4109"} Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.336670 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"219b2aa4-0497-40f8-a3d0-947d37be720d","Type":"ContainerDied","Data":"60c6687648dd41b94a4225ed03866cf4c665cec18c0eb5d84fcb09f0dbc7012b"} Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.336695 4942 
scope.go:117] "RemoveContainer" containerID="d83f11bf1c6741c63e8403adeaf2debe729c7d20905670045ec96ee9fceb1c98" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.336711 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.343566 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "219b2aa4-0497-40f8-a3d0-947d37be720d" (UID: "219b2aa4-0497-40f8-a3d0-947d37be720d"). InnerVolumeSpecName "pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.387749 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config" (OuterVolumeSpecName: "web-config") pod "219b2aa4-0497-40f8-a3d0-947d37be720d" (UID: "219b2aa4-0497-40f8-a3d0-947d37be720d"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.418551 4942 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.418599 4942 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/219b2aa4-0497-40f8-a3d0-947d37be720d-config-out\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.418614 4942 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.418631 4942 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-config\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.418676 4942 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") on node \"crc\" " Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.418691 4942 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.418705 4942 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/219b2aa4-0497-40f8-a3d0-947d37be720d-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.418719 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s685z\" (UniqueName: \"kubernetes.io/projected/219b2aa4-0497-40f8-a3d0-947d37be720d-kube-api-access-s685z\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.418731 4942 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.418746 4942 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/219b2aa4-0497-40f8-a3d0-947d37be720d-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.456431 4942 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.456813 4942 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5") on node "crc" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.500905 4942 scope.go:117] "RemoveContainer" containerID="188d1b7181a7567a8d1558b4f9342a2d5d02b2fb0b9db6d0ed29fc015cdd4109" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.519588 4942 reconciler_common.go:293] "Volume detached for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") on node \"crc\" DevicePath \"\"" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.526407 4942 scope.go:117] "RemoveContainer" containerID="3c0ad897361779d547581def3c2fec1b2d3e96f7b286fe553ae81f2d2d440845" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.571041 4942 scope.go:117] "RemoveContainer" containerID="7267448d8e93628304f568d013573a3a00dd9f0b1c853388c54db4200d6ef067" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.604616 4942 scope.go:117] "RemoveContainer" containerID="d83f11bf1c6741c63e8403adeaf2debe729c7d20905670045ec96ee9fceb1c98" Feb 18 20:00:26 crc kubenswrapper[4942]: E0218 20:00:26.605163 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d83f11bf1c6741c63e8403adeaf2debe729c7d20905670045ec96ee9fceb1c98\": container with ID starting with d83f11bf1c6741c63e8403adeaf2debe729c7d20905670045ec96ee9fceb1c98 not found: ID does not exist" containerID="d83f11bf1c6741c63e8403adeaf2debe729c7d20905670045ec96ee9fceb1c98" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.605227 4942 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d83f11bf1c6741c63e8403adeaf2debe729c7d20905670045ec96ee9fceb1c98"} err="failed to get container status \"d83f11bf1c6741c63e8403adeaf2debe729c7d20905670045ec96ee9fceb1c98\": rpc error: code = NotFound desc = could not find container \"d83f11bf1c6741c63e8403adeaf2debe729c7d20905670045ec96ee9fceb1c98\": container with ID starting with d83f11bf1c6741c63e8403adeaf2debe729c7d20905670045ec96ee9fceb1c98 not found: ID does not exist" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.605262 4942 scope.go:117] "RemoveContainer" containerID="188d1b7181a7567a8d1558b4f9342a2d5d02b2fb0b9db6d0ed29fc015cdd4109" Feb 18 20:00:26 crc kubenswrapper[4942]: E0218 20:00:26.605594 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"188d1b7181a7567a8d1558b4f9342a2d5d02b2fb0b9db6d0ed29fc015cdd4109\": container with ID starting with 188d1b7181a7567a8d1558b4f9342a2d5d02b2fb0b9db6d0ed29fc015cdd4109 not found: ID does not exist" containerID="188d1b7181a7567a8d1558b4f9342a2d5d02b2fb0b9db6d0ed29fc015cdd4109" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.605629 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"188d1b7181a7567a8d1558b4f9342a2d5d02b2fb0b9db6d0ed29fc015cdd4109"} err="failed to get container status \"188d1b7181a7567a8d1558b4f9342a2d5d02b2fb0b9db6d0ed29fc015cdd4109\": rpc error: code = NotFound desc = could not find container \"188d1b7181a7567a8d1558b4f9342a2d5d02b2fb0b9db6d0ed29fc015cdd4109\": container with ID starting with 188d1b7181a7567a8d1558b4f9342a2d5d02b2fb0b9db6d0ed29fc015cdd4109 not found: ID does not exist" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.605652 4942 scope.go:117] "RemoveContainer" containerID="3c0ad897361779d547581def3c2fec1b2d3e96f7b286fe553ae81f2d2d440845" Feb 18 20:00:26 crc kubenswrapper[4942]: E0218 20:00:26.606313 4942 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3c0ad897361779d547581def3c2fec1b2d3e96f7b286fe553ae81f2d2d440845\": container with ID starting with 3c0ad897361779d547581def3c2fec1b2d3e96f7b286fe553ae81f2d2d440845 not found: ID does not exist" containerID="3c0ad897361779d547581def3c2fec1b2d3e96f7b286fe553ae81f2d2d440845" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.606345 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c0ad897361779d547581def3c2fec1b2d3e96f7b286fe553ae81f2d2d440845"} err="failed to get container status \"3c0ad897361779d547581def3c2fec1b2d3e96f7b286fe553ae81f2d2d440845\": rpc error: code = NotFound desc = could not find container \"3c0ad897361779d547581def3c2fec1b2d3e96f7b286fe553ae81f2d2d440845\": container with ID starting with 3c0ad897361779d547581def3c2fec1b2d3e96f7b286fe553ae81f2d2d440845 not found: ID does not exist" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.606367 4942 scope.go:117] "RemoveContainer" containerID="7267448d8e93628304f568d013573a3a00dd9f0b1c853388c54db4200d6ef067" Feb 18 20:00:26 crc kubenswrapper[4942]: E0218 20:00:26.606660 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7267448d8e93628304f568d013573a3a00dd9f0b1c853388c54db4200d6ef067\": container with ID starting with 7267448d8e93628304f568d013573a3a00dd9f0b1c853388c54db4200d6ef067 not found: ID does not exist" containerID="7267448d8e93628304f568d013573a3a00dd9f0b1c853388c54db4200d6ef067" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.606729 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7267448d8e93628304f568d013573a3a00dd9f0b1c853388c54db4200d6ef067"} err="failed to get container status \"7267448d8e93628304f568d013573a3a00dd9f0b1c853388c54db4200d6ef067\": rpc error: code = NotFound desc = could not find container 
\"7267448d8e93628304f568d013573a3a00dd9f0b1c853388c54db4200d6ef067\": container with ID starting with 7267448d8e93628304f568d013573a3a00dd9f0b1c853388c54db4200d6ef067 not found: ID does not exist" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.693509 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.713447 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.730834 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:00:26 crc kubenswrapper[4942]: E0218 20:00:26.731225 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="init-config-reloader" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.731241 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="init-config-reloader" Feb 18 20:00:26 crc kubenswrapper[4942]: E0218 20:00:26.731257 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="thanos-sidecar" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.731264 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="thanos-sidecar" Feb 18 20:00:26 crc kubenswrapper[4942]: E0218 20:00:26.731274 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84ccdc2e-1528-43d4-9c24-42f72bfbb0de" containerName="collect-profiles" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.731281 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="84ccdc2e-1528-43d4-9c24-42f72bfbb0de" containerName="collect-profiles" Feb 18 20:00:26 crc kubenswrapper[4942]: E0218 20:00:26.731300 4942 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="prometheus" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.731305 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="prometheus" Feb 18 20:00:26 crc kubenswrapper[4942]: E0218 20:00:26.731313 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="config-reloader" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.731319 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="config-reloader" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.731500 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="prometheus" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.731513 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="84ccdc2e-1528-43d4-9c24-42f72bfbb0de" containerName="collect-profiles" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.731528 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="thanos-sidecar" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.731539 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" containerName="config-reloader" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.745707 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.745832 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.747820 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.748107 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.748294 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.748498 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7f4m2" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.748783 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.748916 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.749121 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.755385 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.925549 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 
20:00:26.925856 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.925973 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.926089 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.926170 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.926251 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.926338 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.926418 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.926510 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.926597 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-config\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 
20:00:26.926741 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.926916 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:26 crc kubenswrapper[4942]: I0218 20:00:26.927058 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xjld\" (UniqueName: \"kubernetes.io/projected/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-kube-api-access-5xjld\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.029041 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xjld\" (UniqueName: \"kubernetes.io/projected/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-kube-api-access-5xjld\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.029520 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.029572 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.029639 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.029798 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.029867 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.029918 4942 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.029982 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.030014 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.030065 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.030112 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-config\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.030221 4942 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.030302 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.030718 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.031201 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.031641 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " 
pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.034066 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.036065 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.037183 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-config\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.039991 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.039995 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: 
\"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.042388 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.044165 4942 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.044219 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/70b345b463ff13ff33bce45da0f4a8796a1574afa2d8fd2ecf4f2239b34767fb/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.046891 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.049743 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="219b2aa4-0497-40f8-a3d0-947d37be720d" path="/var/lib/kubelet/pods/219b2aa4-0497-40f8-a3d0-947d37be720d/volumes" Feb 18 20:00:27 crc 
kubenswrapper[4942]: I0218 20:00:27.052168 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.061787 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xjld\" (UniqueName: \"kubernetes.io/projected/3ddfc3cc-08ad-436c-b5e9-0ab2ee325555-kube-api-access-5xjld\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.085981 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-99d9d799-8f85-4f2f-8ca2-c6e20d4d69c5\") pod \"prometheus-metric-storage-0\" (UID: \"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555\") " pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.361525 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:27 crc kubenswrapper[4942]: I0218 20:00:27.918531 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 20:00:28 crc kubenswrapper[4942]: I0218 20:00:28.361674 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555","Type":"ContainerStarted","Data":"545326c73b65ccb8a02bb743e6b1e4d065270e3279a3f53424d256e608ba6aea"} Feb 18 20:00:32 crc kubenswrapper[4942]: I0218 20:00:32.036364 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 20:00:32 crc kubenswrapper[4942]: E0218 20:00:32.037318 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:00:33 crc kubenswrapper[4942]: I0218 20:00:33.427538 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555","Type":"ContainerStarted","Data":"fd92dca173e7de10a9954c7748a19c58e83ce614f9318931ad2724cfd5ccc508"} Feb 18 20:00:42 crc kubenswrapper[4942]: I0218 20:00:42.529720 4942 generic.go:334] "Generic (PLEG): container finished" podID="3ddfc3cc-08ad-436c-b5e9-0ab2ee325555" containerID="fd92dca173e7de10a9954c7748a19c58e83ce614f9318931ad2724cfd5ccc508" exitCode=0 Feb 18 20:00:42 crc kubenswrapper[4942]: I0218 20:00:42.529874 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555","Type":"ContainerDied","Data":"fd92dca173e7de10a9954c7748a19c58e83ce614f9318931ad2724cfd5ccc508"} Feb 18 20:00:43 crc kubenswrapper[4942]: I0218 20:00:43.545543 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555","Type":"ContainerStarted","Data":"add4033ed1184099f2d7277281b19211db0936b4f10bedf77a4a511bca20b42e"} Feb 18 20:00:45 crc kubenswrapper[4942]: I0218 20:00:45.037156 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 20:00:45 crc kubenswrapper[4942]: E0218 20:00:45.037811 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:00:46 crc kubenswrapper[4942]: I0218 20:00:46.576910 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555","Type":"ContainerStarted","Data":"35d9b0bb66a5a8bb7a2fdaaab5c93c11198c98a6c020e68354d41a73c50bc2c4"} Feb 18 20:00:46 crc kubenswrapper[4942]: I0218 20:00:46.577569 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3ddfc3cc-08ad-436c-b5e9-0ab2ee325555","Type":"ContainerStarted","Data":"c21b89f287762670936c49e3c845de86b6fab5771fb13375c928cd0141b4bdca"} Feb 18 20:00:46 crc kubenswrapper[4942]: I0218 20:00:46.618186 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.617758131 
podStartE2EDuration="20.617758131s" podCreationTimestamp="2026-02-18 20:00:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 20:00:46.608826545 +0000 UTC m=+2606.313759220" watchObservedRunningTime="2026-02-18 20:00:46.617758131 +0000 UTC m=+2606.322690806" Feb 18 20:00:47 crc kubenswrapper[4942]: I0218 20:00:47.362219 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:49 crc kubenswrapper[4942]: I0218 20:00:49.749110 4942 scope.go:117] "RemoveContainer" containerID="01241740eda1e01b1148596092553039bc8d0f4fa82bfe1851e652e1a9db2c10" Feb 18 20:00:49 crc kubenswrapper[4942]: I0218 20:00:49.789793 4942 scope.go:117] "RemoveContainer" containerID="e6e36f3a740b91dbd03b01c5e3d04984228711747c2ab244bd4357d34fe38eec" Feb 18 20:00:49 crc kubenswrapper[4942]: I0218 20:00:49.822890 4942 scope.go:117] "RemoveContainer" containerID="fced62a823aabc9eb96d7dc1c21c39c26f67347f087ea0b1c45827cef7157377" Feb 18 20:00:49 crc kubenswrapper[4942]: I0218 20:00:49.854006 4942 scope.go:117] "RemoveContainer" containerID="5fb82fb77a7895a43a30ace42481cf4c1da624e8742b15c1cb5a5cf3044d7c22" Feb 18 20:00:57 crc kubenswrapper[4942]: I0218 20:00:57.363278 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:57 crc kubenswrapper[4942]: I0218 20:00:57.374982 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 20:00:57 crc kubenswrapper[4942]: I0218 20:00:57.695018 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.036391 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 20:01:00 crc 
kubenswrapper[4942]: E0218 20:01:00.036916 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.151543 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29524081-m78nz"] Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.152957 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.171424 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524081-m78nz"] Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.269164 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbxkl\" (UniqueName: \"kubernetes.io/projected/2bd5363d-fb40-4123-b9bb-5e6179d65b44-kube-api-access-kbxkl\") pod \"keystone-cron-29524081-m78nz\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.269317 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-config-data\") pod \"keystone-cron-29524081-m78nz\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.269349 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-fernet-keys\") pod \"keystone-cron-29524081-m78nz\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.269371 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-combined-ca-bundle\") pod \"keystone-cron-29524081-m78nz\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.371232 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbxkl\" (UniqueName: \"kubernetes.io/projected/2bd5363d-fb40-4123-b9bb-5e6179d65b44-kube-api-access-kbxkl\") pod \"keystone-cron-29524081-m78nz\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.371383 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-config-data\") pod \"keystone-cron-29524081-m78nz\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.371416 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-fernet-keys\") pod \"keystone-cron-29524081-m78nz\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.371445 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-combined-ca-bundle\") pod \"keystone-cron-29524081-m78nz\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.378488 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-combined-ca-bundle\") pod \"keystone-cron-29524081-m78nz\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.379862 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-fernet-keys\") pod \"keystone-cron-29524081-m78nz\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.381222 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-config-data\") pod \"keystone-cron-29524081-m78nz\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.394955 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbxkl\" (UniqueName: \"kubernetes.io/projected/2bd5363d-fb40-4123-b9bb-5e6179d65b44-kube-api-access-kbxkl\") pod \"keystone-cron-29524081-m78nz\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.473250 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:00 crc kubenswrapper[4942]: I0218 20:01:00.912213 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524081-m78nz"] Feb 18 20:01:01 crc kubenswrapper[4942]: I0218 20:01:01.726021 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524081-m78nz" event={"ID":"2bd5363d-fb40-4123-b9bb-5e6179d65b44","Type":"ContainerStarted","Data":"028bdad6670299f79ade249de30440c67f798800431953939dfd578e4bc4642c"} Feb 18 20:01:01 crc kubenswrapper[4942]: I0218 20:01:01.726341 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524081-m78nz" event={"ID":"2bd5363d-fb40-4123-b9bb-5e6179d65b44","Type":"ContainerStarted","Data":"2ff9040fd76ba76041a0011c39da104db97678efd87e14759f6c3866c30d61ee"} Feb 18 20:01:01 crc kubenswrapper[4942]: I0218 20:01:01.765727 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29524081-m78nz" podStartSLOduration=1.765705041 podStartE2EDuration="1.765705041s" podCreationTimestamp="2026-02-18 20:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 20:01:01.763654737 +0000 UTC m=+2621.468587402" watchObservedRunningTime="2026-02-18 20:01:01.765705041 +0000 UTC m=+2621.470637706" Feb 18 20:01:03 crc kubenswrapper[4942]: I0218 20:01:03.749674 4942 generic.go:334] "Generic (PLEG): container finished" podID="2bd5363d-fb40-4123-b9bb-5e6179d65b44" containerID="028bdad6670299f79ade249de30440c67f798800431953939dfd578e4bc4642c" exitCode=0 Feb 18 20:01:03 crc kubenswrapper[4942]: I0218 20:01:03.749822 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524081-m78nz" 
event={"ID":"2bd5363d-fb40-4123-b9bb-5e6179d65b44","Type":"ContainerDied","Data":"028bdad6670299f79ade249de30440c67f798800431953939dfd578e4bc4642c"} Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.158299 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.284209 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-combined-ca-bundle\") pod \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.284423 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-config-data\") pod \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.284497 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-fernet-keys\") pod \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.284624 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbxkl\" (UniqueName: \"kubernetes.io/projected/2bd5363d-fb40-4123-b9bb-5e6179d65b44-kube-api-access-kbxkl\") pod \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\" (UID: \"2bd5363d-fb40-4123-b9bb-5e6179d65b44\") " Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.290147 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bd5363d-fb40-4123-b9bb-5e6179d65b44-kube-api-access-kbxkl" 
(OuterVolumeSpecName: "kube-api-access-kbxkl") pod "2bd5363d-fb40-4123-b9bb-5e6179d65b44" (UID: "2bd5363d-fb40-4123-b9bb-5e6179d65b44"). InnerVolumeSpecName "kube-api-access-kbxkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.290924 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2bd5363d-fb40-4123-b9bb-5e6179d65b44" (UID: "2bd5363d-fb40-4123-b9bb-5e6179d65b44"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.315064 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2bd5363d-fb40-4123-b9bb-5e6179d65b44" (UID: "2bd5363d-fb40-4123-b9bb-5e6179d65b44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.352123 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-config-data" (OuterVolumeSpecName: "config-data") pod "2bd5363d-fb40-4123-b9bb-5e6179d65b44" (UID: "2bd5363d-fb40-4123-b9bb-5e6179d65b44"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.386919 4942 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.386960 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbxkl\" (UniqueName: \"kubernetes.io/projected/2bd5363d-fb40-4123-b9bb-5e6179d65b44-kube-api-access-kbxkl\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.386974 4942 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.386987 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd5363d-fb40-4123-b9bb-5e6179d65b44-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.772134 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524081-m78nz" event={"ID":"2bd5363d-fb40-4123-b9bb-5e6179d65b44","Type":"ContainerDied","Data":"2ff9040fd76ba76041a0011c39da104db97678efd87e14759f6c3866c30d61ee"} Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.772350 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ff9040fd76ba76041a0011c39da104db97678efd87e14759f6c3866c30d61ee" Feb 18 20:01:05 crc kubenswrapper[4942]: I0218 20:01:05.772400 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524081-m78nz" Feb 18 20:01:14 crc kubenswrapper[4942]: I0218 20:01:14.037548 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 20:01:14 crc kubenswrapper[4942]: E0218 20:01:14.038272 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.164849 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 20:01:19 crc kubenswrapper[4942]: E0218 20:01:19.165709 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd5363d-fb40-4123-b9bb-5e6179d65b44" containerName="keystone-cron" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.165722 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd5363d-fb40-4123-b9bb-5e6179d65b44" containerName="keystone-cron" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.165960 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bd5363d-fb40-4123-b9bb-5e6179d65b44" containerName="keystone-cron" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.166625 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.169568 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.170015 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.170246 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fgwq4" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.171563 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.186088 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.322967 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/498a3ae0-adb2-4729-a2eb-78e267e1613b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.323026 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/498a3ae0-adb2-4729-a2eb-78e267e1613b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.323071 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.323116 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.323148 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/498a3ae0-adb2-4729-a2eb-78e267e1613b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.323211 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.323306 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnpsv\" (UniqueName: \"kubernetes.io/projected/498a3ae0-adb2-4729-a2eb-78e267e1613b-kube-api-access-nnpsv\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.323405 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/498a3ae0-adb2-4729-a2eb-78e267e1613b-config-data\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.323546 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.424821 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/498a3ae0-adb2-4729-a2eb-78e267e1613b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.424868 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/498a3ae0-adb2-4729-a2eb-78e267e1613b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.424898 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.424921 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-openstack-config-secret\") pod 
\"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.424941 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/498a3ae0-adb2-4729-a2eb-78e267e1613b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.424976 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.424994 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnpsv\" (UniqueName: \"kubernetes.io/projected/498a3ae0-adb2-4729-a2eb-78e267e1613b-kube-api-access-nnpsv\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.425039 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/498a3ae0-adb2-4729-a2eb-78e267e1613b-config-data\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.425112 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " 
pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.425289 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/498a3ae0-adb2-4729-a2eb-78e267e1613b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.425325 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.425867 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/498a3ae0-adb2-4729-a2eb-78e267e1613b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.425903 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/498a3ae0-adb2-4729-a2eb-78e267e1613b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.426969 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/498a3ae0-adb2-4729-a2eb-78e267e1613b-config-data\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc 
kubenswrapper[4942]: I0218 20:01:19.429827 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.429950 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.433136 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.450555 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnpsv\" (UniqueName: \"kubernetes.io/projected/498a3ae0-adb2-4729-a2eb-78e267e1613b-kube-api-access-nnpsv\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.457255 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.488904 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.936731 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.957963 4942 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:01:19 crc kubenswrapper[4942]: I0218 20:01:19.981855 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"498a3ae0-adb2-4729-a2eb-78e267e1613b","Type":"ContainerStarted","Data":"4638cc0d3971f910691e7e7ad60b86d01493160078b4f86a07d3570748f42e2f"} Feb 18 20:01:26 crc kubenswrapper[4942]: I0218 20:01:26.035776 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 20:01:26 crc kubenswrapper[4942]: E0218 20:01:26.036696 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:01:31 crc kubenswrapper[4942]: I0218 20:01:31.034932 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 18 20:01:32 crc kubenswrapper[4942]: I0218 20:01:32.133534 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"498a3ae0-adb2-4729-a2eb-78e267e1613b","Type":"ContainerStarted","Data":"169b9c7b6b3a31907bbb5568c6300b81731785a07744ed74ff40a7d3cf050f29"} Feb 18 20:01:32 crc kubenswrapper[4942]: I0218 20:01:32.189018 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/tempest-tests-tempest" podStartSLOduration=3.118768519 podStartE2EDuration="14.188994483s" podCreationTimestamp="2026-02-18 20:01:18 +0000 UTC" firstStartedPulling="2026-02-18 20:01:19.957603691 +0000 UTC m=+2639.662536366" lastFinishedPulling="2026-02-18 20:01:31.027829625 +0000 UTC m=+2650.732762330" observedRunningTime="2026-02-18 20:01:32.166921968 +0000 UTC m=+2651.871854643" watchObservedRunningTime="2026-02-18 20:01:32.188994483 +0000 UTC m=+2651.893927158" Feb 18 20:01:38 crc kubenswrapper[4942]: I0218 20:01:38.035979 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 20:01:38 crc kubenswrapper[4942]: E0218 20:01:38.036828 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:01:49 crc kubenswrapper[4942]: I0218 20:01:49.037097 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 20:01:49 crc kubenswrapper[4942]: E0218 20:01:49.038014 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:02:03 crc kubenswrapper[4942]: I0218 20:02:03.035691 4942 scope.go:117] "RemoveContainer" 
containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 20:02:03 crc kubenswrapper[4942]: E0218 20:02:03.037647 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:02:18 crc kubenswrapper[4942]: I0218 20:02:18.036177 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 20:02:18 crc kubenswrapper[4942]: E0218 20:02:18.036946 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:02:26 crc kubenswrapper[4942]: I0218 20:02:26.704652 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v69dv"] Feb 18 20:02:26 crc kubenswrapper[4942]: I0218 20:02:26.707537 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:26 crc kubenswrapper[4942]: I0218 20:02:26.725511 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v69dv"] Feb 18 20:02:26 crc kubenswrapper[4942]: I0218 20:02:26.851262 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8kwr\" (UniqueName: \"kubernetes.io/projected/0f306d5c-e9fd-4d66-babc-d5812662f2c6-kube-api-access-m8kwr\") pod \"redhat-operators-v69dv\" (UID: \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\") " pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:26 crc kubenswrapper[4942]: I0218 20:02:26.851753 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f306d5c-e9fd-4d66-babc-d5812662f2c6-catalog-content\") pod \"redhat-operators-v69dv\" (UID: \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\") " pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:26 crc kubenswrapper[4942]: I0218 20:02:26.851931 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f306d5c-e9fd-4d66-babc-d5812662f2c6-utilities\") pod \"redhat-operators-v69dv\" (UID: \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\") " pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:26 crc kubenswrapper[4942]: I0218 20:02:26.953990 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f306d5c-e9fd-4d66-babc-d5812662f2c6-catalog-content\") pod \"redhat-operators-v69dv\" (UID: \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\") " pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:26 crc kubenswrapper[4942]: I0218 20:02:26.954082 4942 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f306d5c-e9fd-4d66-babc-d5812662f2c6-utilities\") pod \"redhat-operators-v69dv\" (UID: \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\") " pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:26 crc kubenswrapper[4942]: I0218 20:02:26.954589 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f306d5c-e9fd-4d66-babc-d5812662f2c6-catalog-content\") pod \"redhat-operators-v69dv\" (UID: \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\") " pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:26 crc kubenswrapper[4942]: I0218 20:02:26.954707 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f306d5c-e9fd-4d66-babc-d5812662f2c6-utilities\") pod \"redhat-operators-v69dv\" (UID: \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\") " pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:26 crc kubenswrapper[4942]: I0218 20:02:26.955203 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8kwr\" (UniqueName: \"kubernetes.io/projected/0f306d5c-e9fd-4d66-babc-d5812662f2c6-kube-api-access-m8kwr\") pod \"redhat-operators-v69dv\" (UID: \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\") " pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:26 crc kubenswrapper[4942]: I0218 20:02:26.984292 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8kwr\" (UniqueName: \"kubernetes.io/projected/0f306d5c-e9fd-4d66-babc-d5812662f2c6-kube-api-access-m8kwr\") pod \"redhat-operators-v69dv\" (UID: \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\") " pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:27 crc kubenswrapper[4942]: I0218 20:02:27.036384 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:27 crc kubenswrapper[4942]: I0218 20:02:27.562551 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v69dv"] Feb 18 20:02:27 crc kubenswrapper[4942]: I0218 20:02:27.771240 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v69dv" event={"ID":"0f306d5c-e9fd-4d66-babc-d5812662f2c6","Type":"ContainerStarted","Data":"489436400d82be93fb769cbcdd5323663ef8c990df5f4e0eb67e5cdeeade6085"} Feb 18 20:02:28 crc kubenswrapper[4942]: I0218 20:02:28.789627 4942 generic.go:334] "Generic (PLEG): container finished" podID="0f306d5c-e9fd-4d66-babc-d5812662f2c6" containerID="3e8f85446e91ce4eb5904affbae1ea88bbae59483cc9db62c515656ec6f70abb" exitCode=0 Feb 18 20:02:28 crc kubenswrapper[4942]: I0218 20:02:28.789692 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v69dv" event={"ID":"0f306d5c-e9fd-4d66-babc-d5812662f2c6","Type":"ContainerDied","Data":"3e8f85446e91ce4eb5904affbae1ea88bbae59483cc9db62c515656ec6f70abb"} Feb 18 20:02:29 crc kubenswrapper[4942]: I0218 20:02:29.035938 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 20:02:29 crc kubenswrapper[4942]: I0218 20:02:29.800954 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v69dv" event={"ID":"0f306d5c-e9fd-4d66-babc-d5812662f2c6","Type":"ContainerStarted","Data":"3cf587211302339a8fc7a76125477edee2e278fe85de19d0ae6e18f45b77c370"} Feb 18 20:02:29 crc kubenswrapper[4942]: I0218 20:02:29.804414 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"339398ef2c817c25ee087d2b884ff3bce0c2b59c4bf8c232769e062241809fa2"} Feb 18 20:02:34 crc 
kubenswrapper[4942]: I0218 20:02:34.852954 4942 generic.go:334] "Generic (PLEG): container finished" podID="0f306d5c-e9fd-4d66-babc-d5812662f2c6" containerID="3cf587211302339a8fc7a76125477edee2e278fe85de19d0ae6e18f45b77c370" exitCode=0 Feb 18 20:02:34 crc kubenswrapper[4942]: I0218 20:02:34.853019 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v69dv" event={"ID":"0f306d5c-e9fd-4d66-babc-d5812662f2c6","Type":"ContainerDied","Data":"3cf587211302339a8fc7a76125477edee2e278fe85de19d0ae6e18f45b77c370"} Feb 18 20:02:35 crc kubenswrapper[4942]: I0218 20:02:35.866108 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v69dv" event={"ID":"0f306d5c-e9fd-4d66-babc-d5812662f2c6","Type":"ContainerStarted","Data":"747dc8419d1569ab8e14e3e3717c8ce097eb298c94f623d8eca51d0a0baee704"} Feb 18 20:02:35 crc kubenswrapper[4942]: I0218 20:02:35.888342 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v69dv" podStartSLOduration=3.425524291 podStartE2EDuration="9.888323736s" podCreationTimestamp="2026-02-18 20:02:26 +0000 UTC" firstStartedPulling="2026-02-18 20:02:28.793179841 +0000 UTC m=+2708.498112506" lastFinishedPulling="2026-02-18 20:02:35.255979266 +0000 UTC m=+2714.960911951" observedRunningTime="2026-02-18 20:02:35.886898708 +0000 UTC m=+2715.591831373" watchObservedRunningTime="2026-02-18 20:02:35.888323736 +0000 UTC m=+2715.593256401" Feb 18 20:02:37 crc kubenswrapper[4942]: I0218 20:02:37.047602 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:37 crc kubenswrapper[4942]: I0218 20:02:37.047928 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v69dv" Feb 18 20:02:38 crc kubenswrapper[4942]: I0218 20:02:38.098874 4942 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-v69dv" podUID="0f306d5c-e9fd-4d66-babc-d5812662f2c6" containerName="registry-server" probeResult="failure" output=<
Feb 18 20:02:38 crc kubenswrapper[4942]: timeout: failed to connect service ":50051" within 1s
Feb 18 20:02:38 crc kubenswrapper[4942]: >
Feb 18 20:02:47 crc kubenswrapper[4942]: I0218 20:02:47.098398 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v69dv"
Feb 18 20:02:47 crc kubenswrapper[4942]: I0218 20:02:47.153698 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v69dv"
Feb 18 20:02:47 crc kubenswrapper[4942]: I0218 20:02:47.335098 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v69dv"]
Feb 18 20:02:48 crc kubenswrapper[4942]: I0218 20:02:48.994909 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v69dv" podUID="0f306d5c-e9fd-4d66-babc-d5812662f2c6" containerName="registry-server" containerID="cri-o://747dc8419d1569ab8e14e3e3717c8ce097eb298c94f623d8eca51d0a0baee704" gracePeriod=2
Feb 18 20:02:49 crc kubenswrapper[4942]: I0218 20:02:49.547380 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v69dv"
Feb 18 20:02:49 crc kubenswrapper[4942]: I0218 20:02:49.652914 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f306d5c-e9fd-4d66-babc-d5812662f2c6-utilities\") pod \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\" (UID: \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\") "
Feb 18 20:02:49 crc kubenswrapper[4942]: I0218 20:02:49.653023 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8kwr\" (UniqueName: \"kubernetes.io/projected/0f306d5c-e9fd-4d66-babc-d5812662f2c6-kube-api-access-m8kwr\") pod \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\" (UID: \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\") "
Feb 18 20:02:49 crc kubenswrapper[4942]: I0218 20:02:49.653114 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f306d5c-e9fd-4d66-babc-d5812662f2c6-catalog-content\") pod \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\" (UID: \"0f306d5c-e9fd-4d66-babc-d5812662f2c6\") "
Feb 18 20:02:49 crc kubenswrapper[4942]: I0218 20:02:49.653976 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f306d5c-e9fd-4d66-babc-d5812662f2c6-utilities" (OuterVolumeSpecName: "utilities") pod "0f306d5c-e9fd-4d66-babc-d5812662f2c6" (UID: "0f306d5c-e9fd-4d66-babc-d5812662f2c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 20:02:49 crc kubenswrapper[4942]: I0218 20:02:49.661413 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f306d5c-e9fd-4d66-babc-d5812662f2c6-kube-api-access-m8kwr" (OuterVolumeSpecName: "kube-api-access-m8kwr") pod "0f306d5c-e9fd-4d66-babc-d5812662f2c6" (UID: "0f306d5c-e9fd-4d66-babc-d5812662f2c6"). InnerVolumeSpecName "kube-api-access-m8kwr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:02:49 crc kubenswrapper[4942]: I0218 20:02:49.756837 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f306d5c-e9fd-4d66-babc-d5812662f2c6-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 20:02:49 crc kubenswrapper[4942]: I0218 20:02:49.756886 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8kwr\" (UniqueName: \"kubernetes.io/projected/0f306d5c-e9fd-4d66-babc-d5812662f2c6-kube-api-access-m8kwr\") on node \"crc\" DevicePath \"\""
Feb 18 20:02:49 crc kubenswrapper[4942]: I0218 20:02:49.815688 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f306d5c-e9fd-4d66-babc-d5812662f2c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f306d5c-e9fd-4d66-babc-d5812662f2c6" (UID: "0f306d5c-e9fd-4d66-babc-d5812662f2c6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 20:02:49 crc kubenswrapper[4942]: I0218 20:02:49.859103 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f306d5c-e9fd-4d66-babc-d5812662f2c6-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.007954 4942 generic.go:334] "Generic (PLEG): container finished" podID="0f306d5c-e9fd-4d66-babc-d5812662f2c6" containerID="747dc8419d1569ab8e14e3e3717c8ce097eb298c94f623d8eca51d0a0baee704" exitCode=0
Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.008004 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v69dv" event={"ID":"0f306d5c-e9fd-4d66-babc-d5812662f2c6","Type":"ContainerDied","Data":"747dc8419d1569ab8e14e3e3717c8ce097eb298c94f623d8eca51d0a0baee704"}
Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.008034 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v69dv" event={"ID":"0f306d5c-e9fd-4d66-babc-d5812662f2c6","Type":"ContainerDied","Data":"489436400d82be93fb769cbcdd5323663ef8c990df5f4e0eb67e5cdeeade6085"}
Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.008054 4942 scope.go:117] "RemoveContainer" containerID="747dc8419d1569ab8e14e3e3717c8ce097eb298c94f623d8eca51d0a0baee704"
Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.008177 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v69dv"
Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.036530 4942 scope.go:117] "RemoveContainer" containerID="3cf587211302339a8fc7a76125477edee2e278fe85de19d0ae6e18f45b77c370"
Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.052882 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v69dv"]
Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.060973 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v69dv"]
Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.072060 4942 scope.go:117] "RemoveContainer" containerID="3e8f85446e91ce4eb5904affbae1ea88bbae59483cc9db62c515656ec6f70abb"
Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.135065 4942 scope.go:117] "RemoveContainer" containerID="747dc8419d1569ab8e14e3e3717c8ce097eb298c94f623d8eca51d0a0baee704"
Feb 18 20:02:50 crc kubenswrapper[4942]: E0218 20:02:50.135473 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"747dc8419d1569ab8e14e3e3717c8ce097eb298c94f623d8eca51d0a0baee704\": container with ID starting with 747dc8419d1569ab8e14e3e3717c8ce097eb298c94f623d8eca51d0a0baee704 not found: ID does not exist" containerID="747dc8419d1569ab8e14e3e3717c8ce097eb298c94f623d8eca51d0a0baee704"
Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.135513 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"747dc8419d1569ab8e14e3e3717c8ce097eb298c94f623d8eca51d0a0baee704"} err="failed to get container status \"747dc8419d1569ab8e14e3e3717c8ce097eb298c94f623d8eca51d0a0baee704\": rpc error: code = NotFound desc = could not find container \"747dc8419d1569ab8e14e3e3717c8ce097eb298c94f623d8eca51d0a0baee704\": container with ID starting with 747dc8419d1569ab8e14e3e3717c8ce097eb298c94f623d8eca51d0a0baee704 not found: ID does not exist"
Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.135539 4942 scope.go:117] "RemoveContainer" containerID="3cf587211302339a8fc7a76125477edee2e278fe85de19d0ae6e18f45b77c370"
Feb 18 20:02:50 crc kubenswrapper[4942]: E0218 20:02:50.136132 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cf587211302339a8fc7a76125477edee2e278fe85de19d0ae6e18f45b77c370\": container with ID starting with 3cf587211302339a8fc7a76125477edee2e278fe85de19d0ae6e18f45b77c370 not found: ID does not exist" containerID="3cf587211302339a8fc7a76125477edee2e278fe85de19d0ae6e18f45b77c370"
Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.136193 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cf587211302339a8fc7a76125477edee2e278fe85de19d0ae6e18f45b77c370"} err="failed to get container status \"3cf587211302339a8fc7a76125477edee2e278fe85de19d0ae6e18f45b77c370\": rpc error: code = NotFound desc = could not find container \"3cf587211302339a8fc7a76125477edee2e278fe85de19d0ae6e18f45b77c370\": container with ID starting with 3cf587211302339a8fc7a76125477edee2e278fe85de19d0ae6e18f45b77c370 not found: ID does not exist"
Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.136224 4942 scope.go:117] "RemoveContainer" containerID="3e8f85446e91ce4eb5904affbae1ea88bbae59483cc9db62c515656ec6f70abb"
Feb 18 20:02:50 crc kubenswrapper[4942]: E0218 20:02:50.136483 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e8f85446e91ce4eb5904affbae1ea88bbae59483cc9db62c515656ec6f70abb\": container with ID starting with 3e8f85446e91ce4eb5904affbae1ea88bbae59483cc9db62c515656ec6f70abb not found: ID does not exist" containerID="3e8f85446e91ce4eb5904affbae1ea88bbae59483cc9db62c515656ec6f70abb"
Feb 18 20:02:50 crc kubenswrapper[4942]: I0218 20:02:50.136507 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e8f85446e91ce4eb5904affbae1ea88bbae59483cc9db62c515656ec6f70abb"} err="failed to get container status \"3e8f85446e91ce4eb5904affbae1ea88bbae59483cc9db62c515656ec6f70abb\": rpc error: code = NotFound desc = could not find container \"3e8f85446e91ce4eb5904affbae1ea88bbae59483cc9db62c515656ec6f70abb\": container with ID starting with 3e8f85446e91ce4eb5904affbae1ea88bbae59483cc9db62c515656ec6f70abb not found: ID does not exist"
Feb 18 20:02:51 crc kubenswrapper[4942]: I0218 20:02:51.054614 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f306d5c-e9fd-4d66-babc-d5812662f2c6" path="/var/lib/kubelet/pods/0f306d5c-e9fd-4d66-babc-d5812662f2c6/volumes"
Feb 18 20:04:08 crc kubenswrapper[4942]: I0218 20:04:08.956827 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rm4qf"]
Feb 18 20:04:08 crc kubenswrapper[4942]: E0218 20:04:08.957815 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f306d5c-e9fd-4d66-babc-d5812662f2c6" containerName="extract-content"
Feb 18 20:04:08 crc kubenswrapper[4942]: I0218 20:04:08.957955 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f306d5c-e9fd-4d66-babc-d5812662f2c6" containerName="extract-content"
Feb 18 20:04:08 crc kubenswrapper[4942]: E0218 20:04:08.957981 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f306d5c-e9fd-4d66-babc-d5812662f2c6" containerName="registry-server"
Feb 18 20:04:08 crc kubenswrapper[4942]: I0218 20:04:08.958015 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f306d5c-e9fd-4d66-babc-d5812662f2c6" containerName="registry-server"
Feb 18 20:04:08 crc kubenswrapper[4942]: E0218 20:04:08.958052 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f306d5c-e9fd-4d66-babc-d5812662f2c6" containerName="extract-utilities"
Feb 18 20:04:08 crc kubenswrapper[4942]: I0218 20:04:08.958061 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f306d5c-e9fd-4d66-babc-d5812662f2c6" containerName="extract-utilities"
Feb 18 20:04:08 crc kubenswrapper[4942]: I0218 20:04:08.958501 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f306d5c-e9fd-4d66-babc-d5812662f2c6" containerName="registry-server"
Feb 18 20:04:08 crc kubenswrapper[4942]: I0218 20:04:08.961029 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rm4qf"
Feb 18 20:04:08 crc kubenswrapper[4942]: I0218 20:04:08.983428 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm4qf"]
Feb 18 20:04:09 crc kubenswrapper[4942]: I0218 20:04:09.008904 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwdkj\" (UniqueName: \"kubernetes.io/projected/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-kube-api-access-qwdkj\") pod \"redhat-marketplace-rm4qf\" (UID: \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\") " pod="openshift-marketplace/redhat-marketplace-rm4qf"
Feb 18 20:04:09 crc kubenswrapper[4942]: I0218 20:04:09.009007 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-utilities\") pod \"redhat-marketplace-rm4qf\" (UID: \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\") " pod="openshift-marketplace/redhat-marketplace-rm4qf"
Feb 18 20:04:09 crc kubenswrapper[4942]: I0218 20:04:09.009046 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-catalog-content\") pod \"redhat-marketplace-rm4qf\" (UID: \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\") " pod="openshift-marketplace/redhat-marketplace-rm4qf"
Feb 18 20:04:09 crc kubenswrapper[4942]: I0218 20:04:09.110919 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-catalog-content\") pod \"redhat-marketplace-rm4qf\" (UID: \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\") " pod="openshift-marketplace/redhat-marketplace-rm4qf"
Feb 18 20:04:09 crc kubenswrapper[4942]: I0218 20:04:09.111158 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwdkj\" (UniqueName: \"kubernetes.io/projected/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-kube-api-access-qwdkj\") pod \"redhat-marketplace-rm4qf\" (UID: \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\") " pod="openshift-marketplace/redhat-marketplace-rm4qf"
Feb 18 20:04:09 crc kubenswrapper[4942]: I0218 20:04:09.111263 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-utilities\") pod \"redhat-marketplace-rm4qf\" (UID: \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\") " pod="openshift-marketplace/redhat-marketplace-rm4qf"
Feb 18 20:04:09 crc kubenswrapper[4942]: I0218 20:04:09.112193 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-catalog-content\") pod \"redhat-marketplace-rm4qf\" (UID: \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\") " pod="openshift-marketplace/redhat-marketplace-rm4qf"
Feb 18 20:04:09 crc kubenswrapper[4942]: I0218 20:04:09.112256 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-utilities\") pod \"redhat-marketplace-rm4qf\" (UID: \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\") " pod="openshift-marketplace/redhat-marketplace-rm4qf"
Feb 18 20:04:09 crc kubenswrapper[4942]: I0218 20:04:09.139955 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwdkj\" (UniqueName: \"kubernetes.io/projected/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-kube-api-access-qwdkj\") pod \"redhat-marketplace-rm4qf\" (UID: \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\") " pod="openshift-marketplace/redhat-marketplace-rm4qf"
Feb 18 20:04:09 crc kubenswrapper[4942]: I0218 20:04:09.291902 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rm4qf"
Feb 18 20:04:09 crc kubenswrapper[4942]: I0218 20:04:09.777129 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm4qf"]
Feb 18 20:04:09 crc kubenswrapper[4942]: I0218 20:04:09.853274 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm4qf" event={"ID":"df296b06-0ec2-4b9b-bf0c-f93f98b2f928","Type":"ContainerStarted","Data":"7c8900315c93c1b686c84d136b2ab2bcaad574034c6670f39b754103e2492749"}
Feb 18 20:04:10 crc kubenswrapper[4942]: I0218 20:04:10.866280 4942 generic.go:334] "Generic (PLEG): container finished" podID="df296b06-0ec2-4b9b-bf0c-f93f98b2f928" containerID="4d89f74473f731cd8c10eb232d6b0a80f5b8d13e23ca51dd32db98a490c5f7a6" exitCode=0
Feb 18 20:04:10 crc kubenswrapper[4942]: I0218 20:04:10.866376 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm4qf" event={"ID":"df296b06-0ec2-4b9b-bf0c-f93f98b2f928","Type":"ContainerDied","Data":"4d89f74473f731cd8c10eb232d6b0a80f5b8d13e23ca51dd32db98a490c5f7a6"}
Feb 18 20:04:12 crc kubenswrapper[4942]: I0218 20:04:12.895726 4942 generic.go:334] "Generic (PLEG): container finished" podID="df296b06-0ec2-4b9b-bf0c-f93f98b2f928" containerID="719c934471a30c7d0af7d67e1de1c7b5e6d6548d9f2201cf76aa55d3d4308a66" exitCode=0
Feb 18 20:04:12 crc kubenswrapper[4942]: I0218 20:04:12.895865 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm4qf" event={"ID":"df296b06-0ec2-4b9b-bf0c-f93f98b2f928","Type":"ContainerDied","Data":"719c934471a30c7d0af7d67e1de1c7b5e6d6548d9f2201cf76aa55d3d4308a66"}
Feb 18 20:04:13 crc kubenswrapper[4942]: I0218 20:04:13.917382 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm4qf" event={"ID":"df296b06-0ec2-4b9b-bf0c-f93f98b2f928","Type":"ContainerStarted","Data":"a08595b93f02e72c25162671051895b5f133c5b7c73b292f75f712d7d0ec489a"}
Feb 18 20:04:13 crc kubenswrapper[4942]: I0218 20:04:13.948628 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rm4qf" podStartSLOduration=3.513543258 podStartE2EDuration="5.948600679s" podCreationTimestamp="2026-02-18 20:04:08 +0000 UTC" firstStartedPulling="2026-02-18 20:04:10.869584401 +0000 UTC m=+2810.574517116" lastFinishedPulling="2026-02-18 20:04:13.304641852 +0000 UTC m=+2813.009574537" observedRunningTime="2026-02-18 20:04:13.943644028 +0000 UTC m=+2813.648576703" watchObservedRunningTime="2026-02-18 20:04:13.948600679 +0000 UTC m=+2813.653533344"
Feb 18 20:04:19 crc kubenswrapper[4942]: I0218 20:04:19.292962 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rm4qf"
Feb 18 20:04:19 crc kubenswrapper[4942]: I0218 20:04:19.293714 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rm4qf"
Feb 18 20:04:19 crc kubenswrapper[4942]: I0218 20:04:19.378852 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rm4qf"
Feb 18 20:04:20 crc kubenswrapper[4942]: I0218 20:04:20.062265 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rm4qf"
Feb 18 20:04:20 crc kubenswrapper[4942]: I0218 20:04:20.123205 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm4qf"]
Feb 18 20:04:21 crc kubenswrapper[4942]: I0218 20:04:21.999133 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rm4qf" podUID="df296b06-0ec2-4b9b-bf0c-f93f98b2f928" containerName="registry-server" containerID="cri-o://a08595b93f02e72c25162671051895b5f133c5b7c73b292f75f712d7d0ec489a" gracePeriod=2
Feb 18 20:04:22 crc kubenswrapper[4942]: I0218 20:04:22.535982 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rm4qf"
Feb 18 20:04:22 crc kubenswrapper[4942]: I0218 20:04:22.601410 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-catalog-content\") pod \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\" (UID: \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\") "
Feb 18 20:04:22 crc kubenswrapper[4942]: I0218 20:04:22.603974 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-utilities\") pod \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\" (UID: \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\") "
Feb 18 20:04:22 crc kubenswrapper[4942]: I0218 20:04:22.604174 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwdkj\" (UniqueName: \"kubernetes.io/projected/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-kube-api-access-qwdkj\") pod \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\" (UID: \"df296b06-0ec2-4b9b-bf0c-f93f98b2f928\") "
Feb 18 20:04:22 crc kubenswrapper[4942]: I0218 20:04:22.605897 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-utilities" (OuterVolumeSpecName: "utilities") pod "df296b06-0ec2-4b9b-bf0c-f93f98b2f928" (UID: "df296b06-0ec2-4b9b-bf0c-f93f98b2f928"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 20:04:22 crc kubenswrapper[4942]: I0218 20:04:22.611592 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-kube-api-access-qwdkj" (OuterVolumeSpecName: "kube-api-access-qwdkj") pod "df296b06-0ec2-4b9b-bf0c-f93f98b2f928" (UID: "df296b06-0ec2-4b9b-bf0c-f93f98b2f928"). InnerVolumeSpecName "kube-api-access-qwdkj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:04:22 crc kubenswrapper[4942]: I0218 20:04:22.629241 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df296b06-0ec2-4b9b-bf0c-f93f98b2f928" (UID: "df296b06-0ec2-4b9b-bf0c-f93f98b2f928"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 20:04:22 crc kubenswrapper[4942]: I0218 20:04:22.707423 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 20:04:22 crc kubenswrapper[4942]: I0218 20:04:22.707454 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 20:04:22 crc kubenswrapper[4942]: I0218 20:04:22.707464 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwdkj\" (UniqueName: \"kubernetes.io/projected/df296b06-0ec2-4b9b-bf0c-f93f98b2f928-kube-api-access-qwdkj\") on node \"crc\" DevicePath \"\""
Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.008826 4942 generic.go:334] "Generic (PLEG): container finished" podID="df296b06-0ec2-4b9b-bf0c-f93f98b2f928" containerID="a08595b93f02e72c25162671051895b5f133c5b7c73b292f75f712d7d0ec489a" exitCode=0
Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.008938 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm4qf" event={"ID":"df296b06-0ec2-4b9b-bf0c-f93f98b2f928","Type":"ContainerDied","Data":"a08595b93f02e72c25162671051895b5f133c5b7c73b292f75f712d7d0ec489a"}
Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.009617 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm4qf" event={"ID":"df296b06-0ec2-4b9b-bf0c-f93f98b2f928","Type":"ContainerDied","Data":"7c8900315c93c1b686c84d136b2ab2bcaad574034c6670f39b754103e2492749"}
Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.008950 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rm4qf"
Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.009679 4942 scope.go:117] "RemoveContainer" containerID="a08595b93f02e72c25162671051895b5f133c5b7c73b292f75f712d7d0ec489a"
Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.029834 4942 scope.go:117] "RemoveContainer" containerID="719c934471a30c7d0af7d67e1de1c7b5e6d6548d9f2201cf76aa55d3d4308a66"
Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.048282 4942 scope.go:117] "RemoveContainer" containerID="4d89f74473f731cd8c10eb232d6b0a80f5b8d13e23ca51dd32db98a490c5f7a6"
Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.076326 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm4qf"]
Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.089986 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm4qf"]
Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.127115 4942 scope.go:117] "RemoveContainer" containerID="a08595b93f02e72c25162671051895b5f133c5b7c73b292f75f712d7d0ec489a"
Feb 18 20:04:23 crc kubenswrapper[4942]: E0218 20:04:23.127706 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a08595b93f02e72c25162671051895b5f133c5b7c73b292f75f712d7d0ec489a\": container with ID starting with a08595b93f02e72c25162671051895b5f133c5b7c73b292f75f712d7d0ec489a not found: ID does not exist" containerID="a08595b93f02e72c25162671051895b5f133c5b7c73b292f75f712d7d0ec489a"
Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.127816 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a08595b93f02e72c25162671051895b5f133c5b7c73b292f75f712d7d0ec489a"} err="failed to get container status \"a08595b93f02e72c25162671051895b5f133c5b7c73b292f75f712d7d0ec489a\": rpc error: code = NotFound desc = could not find container \"a08595b93f02e72c25162671051895b5f133c5b7c73b292f75f712d7d0ec489a\": container with ID starting with a08595b93f02e72c25162671051895b5f133c5b7c73b292f75f712d7d0ec489a not found: ID does not exist"
Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.127858 4942 scope.go:117] "RemoveContainer" containerID="719c934471a30c7d0af7d67e1de1c7b5e6d6548d9f2201cf76aa55d3d4308a66"
Feb 18 20:04:23 crc kubenswrapper[4942]: E0218 20:04:23.130472 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"719c934471a30c7d0af7d67e1de1c7b5e6d6548d9f2201cf76aa55d3d4308a66\": container with ID starting with 719c934471a30c7d0af7d67e1de1c7b5e6d6548d9f2201cf76aa55d3d4308a66 not found: ID does not exist" containerID="719c934471a30c7d0af7d67e1de1c7b5e6d6548d9f2201cf76aa55d3d4308a66"
Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.132098 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"719c934471a30c7d0af7d67e1de1c7b5e6d6548d9f2201cf76aa55d3d4308a66"} err="failed to get container status \"719c934471a30c7d0af7d67e1de1c7b5e6d6548d9f2201cf76aa55d3d4308a66\": rpc error: code = NotFound desc = could not find container \"719c934471a30c7d0af7d67e1de1c7b5e6d6548d9f2201cf76aa55d3d4308a66\": container with ID starting with 719c934471a30c7d0af7d67e1de1c7b5e6d6548d9f2201cf76aa55d3d4308a66 not found: ID does not exist"
Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.132165 4942 scope.go:117] "RemoveContainer" containerID="4d89f74473f731cd8c10eb232d6b0a80f5b8d13e23ca51dd32db98a490c5f7a6"
Feb 18 20:04:23 crc kubenswrapper[4942]: E0218 20:04:23.133993 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d89f74473f731cd8c10eb232d6b0a80f5b8d13e23ca51dd32db98a490c5f7a6\": container with ID starting with 4d89f74473f731cd8c10eb232d6b0a80f5b8d13e23ca51dd32db98a490c5f7a6 not found: ID does not exist" containerID="4d89f74473f731cd8c10eb232d6b0a80f5b8d13e23ca51dd32db98a490c5f7a6"
Feb 18 20:04:23 crc kubenswrapper[4942]: I0218 20:04:23.134057 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d89f74473f731cd8c10eb232d6b0a80f5b8d13e23ca51dd32db98a490c5f7a6"} err="failed to get container status \"4d89f74473f731cd8c10eb232d6b0a80f5b8d13e23ca51dd32db98a490c5f7a6\": rpc error: code = NotFound desc = could not find container \"4d89f74473f731cd8c10eb232d6b0a80f5b8d13e23ca51dd32db98a490c5f7a6\": container with ID starting with 4d89f74473f731cd8c10eb232d6b0a80f5b8d13e23ca51dd32db98a490c5f7a6 not found: ID does not exist"
Feb 18 20:04:25 crc kubenswrapper[4942]: I0218 20:04:25.050874 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df296b06-0ec2-4b9b-bf0c-f93f98b2f928" path="/var/lib/kubelet/pods/df296b06-0ec2-4b9b-bf0c-f93f98b2f928/volumes"
Feb 18 20:04:25 crc kubenswrapper[4942]: I0218 20:04:25.990271 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jfgwd"]
Feb 18 20:04:25 crc kubenswrapper[4942]: E0218 20:04:25.990851 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df296b06-0ec2-4b9b-bf0c-f93f98b2f928" containerName="extract-utilities"
Feb 18 20:04:25 crc kubenswrapper[4942]: I0218 20:04:25.990879 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="df296b06-0ec2-4b9b-bf0c-f93f98b2f928" containerName="extract-utilities"
Feb 18 20:04:25 crc kubenswrapper[4942]: E0218 20:04:25.990908 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df296b06-0ec2-4b9b-bf0c-f93f98b2f928" containerName="extract-content"
Feb 18 20:04:25 crc kubenswrapper[4942]: I0218 20:04:25.990920 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="df296b06-0ec2-4b9b-bf0c-f93f98b2f928" containerName="extract-content"
Feb 18 20:04:25 crc kubenswrapper[4942]: E0218 20:04:25.990958 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df296b06-0ec2-4b9b-bf0c-f93f98b2f928" containerName="registry-server"
Feb 18 20:04:25 crc kubenswrapper[4942]: I0218 20:04:25.990971 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="df296b06-0ec2-4b9b-bf0c-f93f98b2f928" containerName="registry-server"
Feb 18 20:04:25 crc kubenswrapper[4942]: I0218 20:04:25.991296 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="df296b06-0ec2-4b9b-bf0c-f93f98b2f928" containerName="registry-server"
Feb 18 20:04:25 crc kubenswrapper[4942]: I0218 20:04:25.994002 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jfgwd"
Feb 18 20:04:26 crc kubenswrapper[4942]: I0218 20:04:26.012111 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jfgwd"]
Feb 18 20:04:26 crc kubenswrapper[4942]: I0218 20:04:26.085621 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f7c8faf-9df9-40e0-83c7-8fb987985673-utilities\") pod \"certified-operators-jfgwd\" (UID: \"3f7c8faf-9df9-40e0-83c7-8fb987985673\") " pod="openshift-marketplace/certified-operators-jfgwd"
Feb 18 20:04:26 crc kubenswrapper[4942]: I0218 20:04:26.085731 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx7pn\" (UniqueName: \"kubernetes.io/projected/3f7c8faf-9df9-40e0-83c7-8fb987985673-kube-api-access-lx7pn\") pod \"certified-operators-jfgwd\" (UID: \"3f7c8faf-9df9-40e0-83c7-8fb987985673\") " pod="openshift-marketplace/certified-operators-jfgwd"
Feb 18 20:04:26 crc kubenswrapper[4942]: I0218 20:04:26.085774 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f7c8faf-9df9-40e0-83c7-8fb987985673-catalog-content\") pod \"certified-operators-jfgwd\" (UID: \"3f7c8faf-9df9-40e0-83c7-8fb987985673\") " pod="openshift-marketplace/certified-operators-jfgwd"
Feb 18 20:04:26 crc kubenswrapper[4942]: I0218 20:04:26.187226 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f7c8faf-9df9-40e0-83c7-8fb987985673-utilities\") pod \"certified-operators-jfgwd\" (UID: \"3f7c8faf-9df9-40e0-83c7-8fb987985673\") " pod="openshift-marketplace/certified-operators-jfgwd"
Feb 18 20:04:26 crc kubenswrapper[4942]: I0218 20:04:26.187297 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx7pn\" (UniqueName: \"kubernetes.io/projected/3f7c8faf-9df9-40e0-83c7-8fb987985673-kube-api-access-lx7pn\") pod \"certified-operators-jfgwd\" (UID: \"3f7c8faf-9df9-40e0-83c7-8fb987985673\") " pod="openshift-marketplace/certified-operators-jfgwd"
Feb 18 20:04:26 crc kubenswrapper[4942]: I0218 20:04:26.187325 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f7c8faf-9df9-40e0-83c7-8fb987985673-catalog-content\") pod \"certified-operators-jfgwd\" (UID: \"3f7c8faf-9df9-40e0-83c7-8fb987985673\") " pod="openshift-marketplace/certified-operators-jfgwd"
Feb 18 20:04:26 crc kubenswrapper[4942]: I0218 20:04:26.188091 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f7c8faf-9df9-40e0-83c7-8fb987985673-catalog-content\") pod \"certified-operators-jfgwd\" (UID: \"3f7c8faf-9df9-40e0-83c7-8fb987985673\") " pod="openshift-marketplace/certified-operators-jfgwd"
Feb 18 20:04:26 crc kubenswrapper[4942]: I0218 20:04:26.188119 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f7c8faf-9df9-40e0-83c7-8fb987985673-utilities\") pod \"certified-operators-jfgwd\" (UID: \"3f7c8faf-9df9-40e0-83c7-8fb987985673\") " pod="openshift-marketplace/certified-operators-jfgwd"
Feb 18 20:04:26 crc kubenswrapper[4942]: I0218 20:04:26.207612 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx7pn\" (UniqueName: \"kubernetes.io/projected/3f7c8faf-9df9-40e0-83c7-8fb987985673-kube-api-access-lx7pn\") pod \"certified-operators-jfgwd\" (UID: \"3f7c8faf-9df9-40e0-83c7-8fb987985673\") " pod="openshift-marketplace/certified-operators-jfgwd"
Feb 18 20:04:26 crc kubenswrapper[4942]: I0218 20:04:26.336074 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jfgwd"
Feb 18 20:04:26 crc kubenswrapper[4942]: I0218 20:04:26.824906 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jfgwd"]
Feb 18 20:04:27 crc kubenswrapper[4942]: I0218 20:04:27.063119 4942 generic.go:334] "Generic (PLEG): container finished" podID="3f7c8faf-9df9-40e0-83c7-8fb987985673" containerID="8b4a72d995ead7aa23d1d6c3a03aa7b487c59cbf0762bfd51172efb9e0f9ba5e" exitCode=0
Feb 18 20:04:27 crc kubenswrapper[4942]: I0218 20:04:27.063182 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfgwd" event={"ID":"3f7c8faf-9df9-40e0-83c7-8fb987985673","Type":"ContainerDied","Data":"8b4a72d995ead7aa23d1d6c3a03aa7b487c59cbf0762bfd51172efb9e0f9ba5e"}
Feb 18 20:04:27 crc kubenswrapper[4942]: I0218 20:04:27.063217 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfgwd" event={"ID":"3f7c8faf-9df9-40e0-83c7-8fb987985673","Type":"ContainerStarted","Data":"d3befc2b3f5841889d2c50989a675ede20bb79cfd1a022cebe42c3897bfc202a"}
Feb 18 20:04:29 crc kubenswrapper[4942]: I0218 20:04:29.085084 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfgwd" event={"ID":"3f7c8faf-9df9-40e0-83c7-8fb987985673","Type":"ContainerStarted","Data":"ba6dbdbacef34144d17506be55e352f7a3b68da7911dad63979a43406b04cce7"}
Feb 18 20:04:30 crc kubenswrapper[4942]: I0218 20:04:30.097361 4942 generic.go:334] "Generic (PLEG): container finished" podID="3f7c8faf-9df9-40e0-83c7-8fb987985673" containerID="ba6dbdbacef34144d17506be55e352f7a3b68da7911dad63979a43406b04cce7" exitCode=0
Feb 18 20:04:30 crc kubenswrapper[4942]: I0218 20:04:30.097422 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfgwd" event={"ID":"3f7c8faf-9df9-40e0-83c7-8fb987985673","Type":"ContainerDied","Data":"ba6dbdbacef34144d17506be55e352f7a3b68da7911dad63979a43406b04cce7"}
Feb 18 20:04:31 crc kubenswrapper[4942]: I0218 20:04:31.110703 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfgwd" event={"ID":"3f7c8faf-9df9-40e0-83c7-8fb987985673","Type":"ContainerStarted","Data":"b3a0cd7538a1e6ca7dc15f925dc50f2f1120e6aea8947fd5d0cc56e26d6d0631"}
Feb 18 20:04:31 crc kubenswrapper[4942]: I0218 20:04:31.127167 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jfgwd" podStartSLOduration=2.432641881 podStartE2EDuration="6.127147353s" podCreationTimestamp="2026-02-18 20:04:25 +0000 UTC" firstStartedPulling="2026-02-18 20:04:27.065104242 +0000 UTC m=+2826.770036907" lastFinishedPulling="2026-02-18 20:04:30.759609724 +0000 UTC m=+2830.464542379" observedRunningTime="2026-02-18 20:04:31.125171411 +0000 UTC m=+2830.830104086" watchObservedRunningTime="2026-02-18 20:04:31.127147353 +0000 UTC m=+2830.832080028"
Feb 18 20:04:36 crc kubenswrapper[4942]: I0218 20:04:36.336799 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jfgwd"
Feb 18 20:04:36 crc kubenswrapper[4942]: I0218 20:04:36.337483 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jfgwd"
Feb 18 20:04:36 crc kubenswrapper[4942]: I0218 20:04:36.389069 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jfgwd"
Feb 18 20:04:37 crc kubenswrapper[4942]: I0218 20:04:37.239745 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jfgwd"
Feb 18 20:04:37 crc kubenswrapper[4942]: I0218 20:04:37.289013 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jfgwd"]
Feb 18 20:04:39 crc kubenswrapper[4942]: I0218 20:04:39.207321 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jfgwd" podUID="3f7c8faf-9df9-40e0-83c7-8fb987985673" containerName="registry-server" containerID="cri-o://b3a0cd7538a1e6ca7dc15f925dc50f2f1120e6aea8947fd5d0cc56e26d6d0631" gracePeriod=2
Feb 18 20:04:39 crc kubenswrapper[4942]: I0218 20:04:39.726572 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jfgwd" Feb 18 20:04:39 crc kubenswrapper[4942]: I0218 20:04:39.872472 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx7pn\" (UniqueName: \"kubernetes.io/projected/3f7c8faf-9df9-40e0-83c7-8fb987985673-kube-api-access-lx7pn\") pod \"3f7c8faf-9df9-40e0-83c7-8fb987985673\" (UID: \"3f7c8faf-9df9-40e0-83c7-8fb987985673\") " Feb 18 20:04:39 crc kubenswrapper[4942]: I0218 20:04:39.872709 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f7c8faf-9df9-40e0-83c7-8fb987985673-utilities\") pod \"3f7c8faf-9df9-40e0-83c7-8fb987985673\" (UID: \"3f7c8faf-9df9-40e0-83c7-8fb987985673\") " Feb 18 20:04:39 crc kubenswrapper[4942]: I0218 20:04:39.872857 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f7c8faf-9df9-40e0-83c7-8fb987985673-catalog-content\") pod \"3f7c8faf-9df9-40e0-83c7-8fb987985673\" (UID: \"3f7c8faf-9df9-40e0-83c7-8fb987985673\") " Feb 18 20:04:39 crc kubenswrapper[4942]: I0218 20:04:39.873626 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f7c8faf-9df9-40e0-83c7-8fb987985673-utilities" (OuterVolumeSpecName: "utilities") pod "3f7c8faf-9df9-40e0-83c7-8fb987985673" (UID: "3f7c8faf-9df9-40e0-83c7-8fb987985673"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:04:39 crc kubenswrapper[4942]: I0218 20:04:39.881160 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f7c8faf-9df9-40e0-83c7-8fb987985673-kube-api-access-lx7pn" (OuterVolumeSpecName: "kube-api-access-lx7pn") pod "3f7c8faf-9df9-40e0-83c7-8fb987985673" (UID: "3f7c8faf-9df9-40e0-83c7-8fb987985673"). InnerVolumeSpecName "kube-api-access-lx7pn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:04:39 crc kubenswrapper[4942]: I0218 20:04:39.925576 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f7c8faf-9df9-40e0-83c7-8fb987985673-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f7c8faf-9df9-40e0-83c7-8fb987985673" (UID: "3f7c8faf-9df9-40e0-83c7-8fb987985673"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:04:39 crc kubenswrapper[4942]: I0218 20:04:39.975540 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx7pn\" (UniqueName: \"kubernetes.io/projected/3f7c8faf-9df9-40e0-83c7-8fb987985673-kube-api-access-lx7pn\") on node \"crc\" DevicePath \"\"" Feb 18 20:04:39 crc kubenswrapper[4942]: I0218 20:04:39.975585 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f7c8faf-9df9-40e0-83c7-8fb987985673-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:04:39 crc kubenswrapper[4942]: I0218 20:04:39.975600 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f7c8faf-9df9-40e0-83c7-8fb987985673-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.232988 4942 generic.go:334] "Generic (PLEG): container finished" podID="3f7c8faf-9df9-40e0-83c7-8fb987985673" containerID="b3a0cd7538a1e6ca7dc15f925dc50f2f1120e6aea8947fd5d0cc56e26d6d0631" exitCode=0 Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.233081 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jfgwd" event={"ID":"3f7c8faf-9df9-40e0-83c7-8fb987985673","Type":"ContainerDied","Data":"b3a0cd7538a1e6ca7dc15f925dc50f2f1120e6aea8947fd5d0cc56e26d6d0631"} Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.233197 4942 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-jfgwd" event={"ID":"3f7c8faf-9df9-40e0-83c7-8fb987985673","Type":"ContainerDied","Data":"d3befc2b3f5841889d2c50989a675ede20bb79cfd1a022cebe42c3897bfc202a"} Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.233270 4942 scope.go:117] "RemoveContainer" containerID="b3a0cd7538a1e6ca7dc15f925dc50f2f1120e6aea8947fd5d0cc56e26d6d0631" Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.233119 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jfgwd" Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.285017 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jfgwd"] Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.300612 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jfgwd"] Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.302443 4942 scope.go:117] "RemoveContainer" containerID="ba6dbdbacef34144d17506be55e352f7a3b68da7911dad63979a43406b04cce7" Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.337035 4942 scope.go:117] "RemoveContainer" containerID="8b4a72d995ead7aa23d1d6c3a03aa7b487c59cbf0762bfd51172efb9e0f9ba5e" Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.402663 4942 scope.go:117] "RemoveContainer" containerID="b3a0cd7538a1e6ca7dc15f925dc50f2f1120e6aea8947fd5d0cc56e26d6d0631" Feb 18 20:04:40 crc kubenswrapper[4942]: E0218 20:04:40.403185 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3a0cd7538a1e6ca7dc15f925dc50f2f1120e6aea8947fd5d0cc56e26d6d0631\": container with ID starting with b3a0cd7538a1e6ca7dc15f925dc50f2f1120e6aea8947fd5d0cc56e26d6d0631 not found: ID does not exist" containerID="b3a0cd7538a1e6ca7dc15f925dc50f2f1120e6aea8947fd5d0cc56e26d6d0631" Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 
20:04:40.403218 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3a0cd7538a1e6ca7dc15f925dc50f2f1120e6aea8947fd5d0cc56e26d6d0631"} err="failed to get container status \"b3a0cd7538a1e6ca7dc15f925dc50f2f1120e6aea8947fd5d0cc56e26d6d0631\": rpc error: code = NotFound desc = could not find container \"b3a0cd7538a1e6ca7dc15f925dc50f2f1120e6aea8947fd5d0cc56e26d6d0631\": container with ID starting with b3a0cd7538a1e6ca7dc15f925dc50f2f1120e6aea8947fd5d0cc56e26d6d0631 not found: ID does not exist" Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.403240 4942 scope.go:117] "RemoveContainer" containerID="ba6dbdbacef34144d17506be55e352f7a3b68da7911dad63979a43406b04cce7" Feb 18 20:04:40 crc kubenswrapper[4942]: E0218 20:04:40.403551 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba6dbdbacef34144d17506be55e352f7a3b68da7911dad63979a43406b04cce7\": container with ID starting with ba6dbdbacef34144d17506be55e352f7a3b68da7911dad63979a43406b04cce7 not found: ID does not exist" containerID="ba6dbdbacef34144d17506be55e352f7a3b68da7911dad63979a43406b04cce7" Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.403598 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba6dbdbacef34144d17506be55e352f7a3b68da7911dad63979a43406b04cce7"} err="failed to get container status \"ba6dbdbacef34144d17506be55e352f7a3b68da7911dad63979a43406b04cce7\": rpc error: code = NotFound desc = could not find container \"ba6dbdbacef34144d17506be55e352f7a3b68da7911dad63979a43406b04cce7\": container with ID starting with ba6dbdbacef34144d17506be55e352f7a3b68da7911dad63979a43406b04cce7 not found: ID does not exist" Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.403636 4942 scope.go:117] "RemoveContainer" containerID="8b4a72d995ead7aa23d1d6c3a03aa7b487c59cbf0762bfd51172efb9e0f9ba5e" Feb 18 20:04:40 crc 
kubenswrapper[4942]: E0218 20:04:40.403960 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b4a72d995ead7aa23d1d6c3a03aa7b487c59cbf0762bfd51172efb9e0f9ba5e\": container with ID starting with 8b4a72d995ead7aa23d1d6c3a03aa7b487c59cbf0762bfd51172efb9e0f9ba5e not found: ID does not exist" containerID="8b4a72d995ead7aa23d1d6c3a03aa7b487c59cbf0762bfd51172efb9e0f9ba5e" Feb 18 20:04:40 crc kubenswrapper[4942]: I0218 20:04:40.403995 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b4a72d995ead7aa23d1d6c3a03aa7b487c59cbf0762bfd51172efb9e0f9ba5e"} err="failed to get container status \"8b4a72d995ead7aa23d1d6c3a03aa7b487c59cbf0762bfd51172efb9e0f9ba5e\": rpc error: code = NotFound desc = could not find container \"8b4a72d995ead7aa23d1d6c3a03aa7b487c59cbf0762bfd51172efb9e0f9ba5e\": container with ID starting with 8b4a72d995ead7aa23d1d6c3a03aa7b487c59cbf0762bfd51172efb9e0f9ba5e not found: ID does not exist" Feb 18 20:04:41 crc kubenswrapper[4942]: I0218 20:04:41.055541 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f7c8faf-9df9-40e0-83c7-8fb987985673" path="/var/lib/kubelet/pods/3f7c8faf-9df9-40e0-83c7-8fb987985673/volumes" Feb 18 20:04:53 crc kubenswrapper[4942]: I0218 20:04:53.740434 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:04:53 crc kubenswrapper[4942]: I0218 20:04:53.741073 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 18 20:05:23 crc kubenswrapper[4942]: I0218 20:05:23.741489 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:05:23 crc kubenswrapper[4942]: I0218 20:05:23.742076 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:05:53 crc kubenswrapper[4942]: I0218 20:05:53.741248 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:05:53 crc kubenswrapper[4942]: I0218 20:05:53.741935 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:05:53 crc kubenswrapper[4942]: I0218 20:05:53.741998 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 20:05:53 crc kubenswrapper[4942]: I0218 20:05:53.742870 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"339398ef2c817c25ee087d2b884ff3bce0c2b59c4bf8c232769e062241809fa2"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:05:53 crc kubenswrapper[4942]: I0218 20:05:53.742937 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://339398ef2c817c25ee087d2b884ff3bce0c2b59c4bf8c232769e062241809fa2" gracePeriod=600 Feb 18 20:05:53 crc kubenswrapper[4942]: I0218 20:05:53.994047 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="339398ef2c817c25ee087d2b884ff3bce0c2b59c4bf8c232769e062241809fa2" exitCode=0 Feb 18 20:05:53 crc kubenswrapper[4942]: I0218 20:05:53.994136 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"339398ef2c817c25ee087d2b884ff3bce0c2b59c4bf8c232769e062241809fa2"} Feb 18 20:05:53 crc kubenswrapper[4942]: I0218 20:05:53.994448 4942 scope.go:117] "RemoveContainer" containerID="5e4e4cde2bbc876890dcc79d1035aec859f9c3fe975d1ce36677f131f53ddd1d" Feb 18 20:05:55 crc kubenswrapper[4942]: I0218 20:05:55.005626 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495"} Feb 18 20:07:31 crc kubenswrapper[4942]: I0218 20:07:31.772298 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="c330a0f3-afd7-4b55-8d33-8617b38bba91" 
containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Feb 18 20:07:31 crc kubenswrapper[4942]: I0218 20:07:31.772299 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="c330a0f3-afd7-4b55-8d33-8617b38bba91" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 18 20:07:31 crc kubenswrapper[4942]: I0218 20:07:31.994971 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="e7ce79f4-8fac-499d-aa4d-1ca6b2b50259" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.186:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 20:07:36 crc kubenswrapper[4942]: I0218 20:07:36.773133 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="c330a0f3-afd7-4b55-8d33-8617b38bba91" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 18 20:07:37 crc kubenswrapper[4942]: I0218 20:07:37.039009 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="e7ce79f4-8fac-499d-aa4d-1ca6b2b50259" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.186:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 20:07:41 crc kubenswrapper[4942]: I0218 20:07:41.772046 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="c330a0f3-afd7-4b55-8d33-8617b38bba91" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 18 20:07:41 crc kubenswrapper[4942]: I0218 20:07:41.772653 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Feb 18 20:07:41 crc kubenswrapper[4942]: I0218 20:07:41.773726 4942 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"724cd265bca66d36c5206546352c1744fd4175372a93790f844a697f57c62cf3"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Feb 18 20:07:41 crc kubenswrapper[4942]: I0218 20:07:41.773824 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c330a0f3-afd7-4b55-8d33-8617b38bba91" containerName="ceilometer-central-agent" containerID="cri-o://724cd265bca66d36c5206546352c1744fd4175372a93790f844a697f57c62cf3" gracePeriod=30 Feb 18 20:07:42 crc kubenswrapper[4942]: I0218 20:07:42.083974 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="e7ce79f4-8fac-499d-aa4d-1ca6b2b50259" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.186:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 20:07:42 crc kubenswrapper[4942]: I0218 20:07:42.084064 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 20:07:42 crc kubenswrapper[4942]: I0218 20:07:42.085357 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"a4316a50ea1a16243db84d37fb517e94ea394f23b89e3660f9729bb3224e6560"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Feb 18 20:07:42 crc kubenswrapper[4942]: I0218 20:07:42.085434 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e7ce79f4-8fac-499d-aa4d-1ca6b2b50259" containerName="cinder-scheduler" containerID="cri-o://a4316a50ea1a16243db84d37fb517e94ea394f23b89e3660f9729bb3224e6560" gracePeriod=30 Feb 18 20:08:01 crc kubenswrapper[4942]: I0218 20:08:01.769035 4942 
prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="c330a0f3-afd7-4b55-8d33-8617b38bba91" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Feb 18 20:08:20 crc kubenswrapper[4942]: I0218 20:08:20.223365 4942 generic.go:334] "Generic (PLEG): container finished" podID="e7ce79f4-8fac-499d-aa4d-1ca6b2b50259" containerID="a4316a50ea1a16243db84d37fb517e94ea394f23b89e3660f9729bb3224e6560" exitCode=-1 Feb 18 20:08:20 crc kubenswrapper[4942]: I0218 20:08:20.223476 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259","Type":"ContainerDied","Data":"a4316a50ea1a16243db84d37fb517e94ea394f23b89e3660f9729bb3224e6560"} Feb 18 20:08:23 crc kubenswrapper[4942]: I0218 20:08:23.741327 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:08:23 crc kubenswrapper[4942]: I0218 20:08:23.742116 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:08:24 crc kubenswrapper[4942]: I0218 20:08:24.891483 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" podUID="3b42f10c-a162-4d74-9eed-b6c3ef08cdb7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": dial tcp 10.217.0.87:8081: connect: connection refused" Feb 18 20:08:24 crc kubenswrapper[4942]: I0218 20:08:24.891577 4942 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" podUID="3b42f10c-a162-4d74-9eed-b6c3ef08cdb7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": dial tcp 10.217.0.87:8081: connect: connection refused" Feb 18 20:08:26 crc kubenswrapper[4942]: I0218 20:08:26.737731 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="c330a0f3-afd7-4b55-8d33-8617b38bba91" containerName="ceilometer-notification-agent" probeResult="failure" output=< Feb 18 20:08:26 crc kubenswrapper[4942]: Unkown error: Expecting value: line 1 column 1 (char 0) Feb 18 20:08:26 crc kubenswrapper[4942]: > Feb 18 20:08:26 crc kubenswrapper[4942]: I0218 20:08:26.738125 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Feb 18 20:08:28 crc kubenswrapper[4942]: I0218 20:08:28.312471 4942 generic.go:334] "Generic (PLEG): container finished" podID="3b42f10c-a162-4d74-9eed-b6c3ef08cdb7" containerID="b0e5cc17d5708a2bf67f2c62fdedb963fde1c3e9e426935ccb4895be0efefc73" exitCode=1 Feb 18 20:08:28 crc kubenswrapper[4942]: I0218 20:08:28.314151 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" event={"ID":"3b42f10c-a162-4d74-9eed-b6c3ef08cdb7","Type":"ContainerDied","Data":"b0e5cc17d5708a2bf67f2c62fdedb963fde1c3e9e426935ccb4895be0efefc73"} Feb 18 20:08:28 crc kubenswrapper[4942]: I0218 20:08:28.314997 4942 scope.go:117] "RemoveContainer" containerID="b0e5cc17d5708a2bf67f2c62fdedb963fde1c3e9e426935ccb4895be0efefc73" Feb 18 20:08:29 crc kubenswrapper[4942]: I0218 20:08:29.325134 4942 generic.go:334] "Generic (PLEG): container finished" podID="c330a0f3-afd7-4b55-8d33-8617b38bba91" containerID="724cd265bca66d36c5206546352c1744fd4175372a93790f844a697f57c62cf3" exitCode=137 Feb 18 20:08:29 crc kubenswrapper[4942]: I0218 
20:08:29.325220 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c330a0f3-afd7-4b55-8d33-8617b38bba91","Type":"ContainerDied","Data":"724cd265bca66d36c5206546352c1744fd4175372a93790f844a697f57c62cf3"} Feb 18 20:08:29 crc kubenswrapper[4942]: I0218 20:08:29.433482 4942 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:08:30 crc kubenswrapper[4942]: I0218 20:08:30.336710 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c330a0f3-afd7-4b55-8d33-8617b38bba91","Type":"ContainerStarted","Data":"9fe9e0aa37767ce5de90121ee990ec21b503a4213a1d4d290cce06cc587867b8"} Feb 18 20:08:30 crc kubenswrapper[4942]: I0218 20:08:30.337697 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-notification-agent" containerStatusID={"Type":"cri-o","ID":"532c795a258873ae20237a974d4194a954b9ccd2130576ed8beb675e6befbd60"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-notification-agent failed liveness probe, will be restarted" Feb 18 20:08:30 crc kubenswrapper[4942]: I0218 20:08:30.337799 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c330a0f3-afd7-4b55-8d33-8617b38bba91" containerName="ceilometer-notification-agent" containerID="cri-o://532c795a258873ae20237a974d4194a954b9ccd2130576ed8beb675e6befbd60" gracePeriod=30 Feb 18 20:08:30 crc kubenswrapper[4942]: I0218 20:08:30.339643 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e7ce79f4-8fac-499d-aa4d-1ca6b2b50259","Type":"ContainerStarted","Data":"2dda9acf7c5f07d65a720caa052bdd40927e1bef6f72f788b3ad1623f5768a13"} Feb 18 20:08:30 crc kubenswrapper[4942]: I0218 20:08:30.342473 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" 
event={"ID":"3b42f10c-a162-4d74-9eed-b6c3ef08cdb7","Type":"ContainerStarted","Data":"9429fdb2b3dba638af08226edd5a7591b18b05408131768a1f55e256068987cc"} Feb 18 20:08:30 crc kubenswrapper[4942]: I0218 20:08:30.342733 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" Feb 18 20:08:30 crc kubenswrapper[4942]: I0218 20:08:30.954433 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 20:08:33 crc kubenswrapper[4942]: I0218 20:08:33.397151 4942 generic.go:334] "Generic (PLEG): container finished" podID="c330a0f3-afd7-4b55-8d33-8617b38bba91" containerID="532c795a258873ae20237a974d4194a954b9ccd2130576ed8beb675e6befbd60" exitCode=0 Feb 18 20:08:33 crc kubenswrapper[4942]: I0218 20:08:33.398580 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c330a0f3-afd7-4b55-8d33-8617b38bba91","Type":"ContainerDied","Data":"532c795a258873ae20237a974d4194a954b9ccd2130576ed8beb675e6befbd60"} Feb 18 20:08:33 crc kubenswrapper[4942]: I0218 20:08:33.398648 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c330a0f3-afd7-4b55-8d33-8617b38bba91","Type":"ContainerStarted","Data":"e5443cb0e3f421192e8c5cff2ac8b62d842802120d6ff0cd27e163c42866d441"} Feb 18 20:08:34 crc kubenswrapper[4942]: I0218 20:08:34.891818 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-745bbbd77b-4xhmd" Feb 18 20:08:35 crc kubenswrapper[4942]: I0218 20:08:35.967735 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 20:08:53 crc kubenswrapper[4942]: I0218 20:08:53.740449 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:08:53 crc kubenswrapper[4942]: I0218 20:08:53.740963 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:09:23 crc kubenswrapper[4942]: I0218 20:09:23.741175 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:09:23 crc kubenswrapper[4942]: I0218 20:09:23.744167 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:09:23 crc kubenswrapper[4942]: I0218 20:09:23.744513 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" Feb 18 20:09:23 crc kubenswrapper[4942]: I0218 20:09:23.746054 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 20:09:23 crc kubenswrapper[4942]: I0218 20:09:23.746427 4942 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" gracePeriod=600 Feb 18 20:09:23 crc kubenswrapper[4942]: E0218 20:09:23.873157 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:09:23 crc kubenswrapper[4942]: I0218 20:09:23.888411 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" exitCode=0 Feb 18 20:09:23 crc kubenswrapper[4942]: I0218 20:09:23.888506 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495"} Feb 18 20:09:23 crc kubenswrapper[4942]: I0218 20:09:23.888847 4942 scope.go:117] "RemoveContainer" containerID="339398ef2c817c25ee087d2b884ff3bce0c2b59c4bf8c232769e062241809fa2" Feb 18 20:09:23 crc kubenswrapper[4942]: I0218 20:09:23.889838 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:09:23 crc kubenswrapper[4942]: E0218 20:09:23.890231 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:09:37 crc kubenswrapper[4942]: I0218 20:09:37.039012 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:09:37 crc kubenswrapper[4942]: E0218 20:09:37.040432 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.728139 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-54nj4"] Feb 18 20:09:46 crc kubenswrapper[4942]: E0218 20:09:46.729021 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f7c8faf-9df9-40e0-83c7-8fb987985673" containerName="extract-utilities" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.729033 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7c8faf-9df9-40e0-83c7-8fb987985673" containerName="extract-utilities" Feb 18 20:09:46 crc kubenswrapper[4942]: E0218 20:09:46.729046 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f7c8faf-9df9-40e0-83c7-8fb987985673" containerName="extract-content" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.729052 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7c8faf-9df9-40e0-83c7-8fb987985673" containerName="extract-content" Feb 18 20:09:46 crc kubenswrapper[4942]: E0218 20:09:46.729084 4942 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3f7c8faf-9df9-40e0-83c7-8fb987985673" containerName="registry-server" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.729091 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7c8faf-9df9-40e0-83c7-8fb987985673" containerName="registry-server" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.729257 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f7c8faf-9df9-40e0-83c7-8fb987985673" containerName="registry-server" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.730576 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.760295 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-54nj4"] Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.795584 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wnh4\" (UniqueName: \"kubernetes.io/projected/8fb258f6-7f5f-4390-914c-c995678e50a1-kube-api-access-6wnh4\") pod \"community-operators-54nj4\" (UID: \"8fb258f6-7f5f-4390-914c-c995678e50a1\") " pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.795917 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb258f6-7f5f-4390-914c-c995678e50a1-utilities\") pod \"community-operators-54nj4\" (UID: \"8fb258f6-7f5f-4390-914c-c995678e50a1\") " pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.795988 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb258f6-7f5f-4390-914c-c995678e50a1-catalog-content\") pod 
\"community-operators-54nj4\" (UID: \"8fb258f6-7f5f-4390-914c-c995678e50a1\") " pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.898813 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wnh4\" (UniqueName: \"kubernetes.io/projected/8fb258f6-7f5f-4390-914c-c995678e50a1-kube-api-access-6wnh4\") pod \"community-operators-54nj4\" (UID: \"8fb258f6-7f5f-4390-914c-c995678e50a1\") " pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.899200 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb258f6-7f5f-4390-914c-c995678e50a1-utilities\") pod \"community-operators-54nj4\" (UID: \"8fb258f6-7f5f-4390-914c-c995678e50a1\") " pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.899320 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb258f6-7f5f-4390-914c-c995678e50a1-catalog-content\") pod \"community-operators-54nj4\" (UID: \"8fb258f6-7f5f-4390-914c-c995678e50a1\") " pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.899709 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb258f6-7f5f-4390-914c-c995678e50a1-utilities\") pod \"community-operators-54nj4\" (UID: \"8fb258f6-7f5f-4390-914c-c995678e50a1\") " pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.899800 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb258f6-7f5f-4390-914c-c995678e50a1-catalog-content\") pod \"community-operators-54nj4\" (UID: 
\"8fb258f6-7f5f-4390-914c-c995678e50a1\") " pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:46 crc kubenswrapper[4942]: I0218 20:09:46.946115 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wnh4\" (UniqueName: \"kubernetes.io/projected/8fb258f6-7f5f-4390-914c-c995678e50a1-kube-api-access-6wnh4\") pod \"community-operators-54nj4\" (UID: \"8fb258f6-7f5f-4390-914c-c995678e50a1\") " pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:47 crc kubenswrapper[4942]: I0218 20:09:47.057323 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:47 crc kubenswrapper[4942]: I0218 20:09:47.589611 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-54nj4"] Feb 18 20:09:48 crc kubenswrapper[4942]: I0218 20:09:48.194947 4942 generic.go:334] "Generic (PLEG): container finished" podID="8fb258f6-7f5f-4390-914c-c995678e50a1" containerID="461620309be11c6d74d4dfea6973afbc2f273830fc09c7aefbb9416bcfc66705" exitCode=0 Feb 18 20:09:48 crc kubenswrapper[4942]: I0218 20:09:48.195269 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54nj4" event={"ID":"8fb258f6-7f5f-4390-914c-c995678e50a1","Type":"ContainerDied","Data":"461620309be11c6d74d4dfea6973afbc2f273830fc09c7aefbb9416bcfc66705"} Feb 18 20:09:48 crc kubenswrapper[4942]: I0218 20:09:48.195666 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54nj4" event={"ID":"8fb258f6-7f5f-4390-914c-c995678e50a1","Type":"ContainerStarted","Data":"b40cd8f81d01c74449c4f0f506f6c37d26862a0d01d0a884cb9a3cbe588a5959"} Feb 18 20:09:49 crc kubenswrapper[4942]: I0218 20:09:49.036411 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:09:49 crc kubenswrapper[4942]: E0218 
20:09:49.037342 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:09:49 crc kubenswrapper[4942]: I0218 20:09:49.207957 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54nj4" event={"ID":"8fb258f6-7f5f-4390-914c-c995678e50a1","Type":"ContainerStarted","Data":"8e298d888c278e4c90132d234a8180768a8d5dcf4ddeb6b7f12c3cde11330db2"} Feb 18 20:09:51 crc kubenswrapper[4942]: I0218 20:09:51.230373 4942 generic.go:334] "Generic (PLEG): container finished" podID="8fb258f6-7f5f-4390-914c-c995678e50a1" containerID="8e298d888c278e4c90132d234a8180768a8d5dcf4ddeb6b7f12c3cde11330db2" exitCode=0 Feb 18 20:09:51 crc kubenswrapper[4942]: I0218 20:09:51.230634 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54nj4" event={"ID":"8fb258f6-7f5f-4390-914c-c995678e50a1","Type":"ContainerDied","Data":"8e298d888c278e4c90132d234a8180768a8d5dcf4ddeb6b7f12c3cde11330db2"} Feb 18 20:09:52 crc kubenswrapper[4942]: I0218 20:09:52.244584 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54nj4" event={"ID":"8fb258f6-7f5f-4390-914c-c995678e50a1","Type":"ContainerStarted","Data":"8305a88c416de0c9b2310929b24a0d21d4e3ea27fd98a7ee5ee4c0732cd69eba"} Feb 18 20:09:52 crc kubenswrapper[4942]: I0218 20:09:52.292245 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-54nj4" podStartSLOduration=2.784529353 podStartE2EDuration="6.292220387s" podCreationTimestamp="2026-02-18 20:09:46 +0000 UTC" 
firstStartedPulling="2026-02-18 20:09:48.199998568 +0000 UTC m=+3147.904931233" lastFinishedPulling="2026-02-18 20:09:51.707689562 +0000 UTC m=+3151.412622267" observedRunningTime="2026-02-18 20:09:52.268999823 +0000 UTC m=+3151.973932528" watchObservedRunningTime="2026-02-18 20:09:52.292220387 +0000 UTC m=+3151.997153062" Feb 18 20:09:57 crc kubenswrapper[4942]: I0218 20:09:57.058078 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:57 crc kubenswrapper[4942]: I0218 20:09:57.058799 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:57 crc kubenswrapper[4942]: I0218 20:09:57.133066 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:57 crc kubenswrapper[4942]: I0218 20:09:57.381969 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:09:57 crc kubenswrapper[4942]: I0218 20:09:57.438587 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-54nj4"] Feb 18 20:09:59 crc kubenswrapper[4942]: I0218 20:09:59.336503 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-54nj4" podUID="8fb258f6-7f5f-4390-914c-c995678e50a1" containerName="registry-server" containerID="cri-o://8305a88c416de0c9b2310929b24a0d21d4e3ea27fd98a7ee5ee4c0732cd69eba" gracePeriod=2 Feb 18 20:09:59 crc kubenswrapper[4942]: I0218 20:09:59.976415 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.037090 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:10:00 crc kubenswrapper[4942]: E0218 20:10:00.037404 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.088853 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wnh4\" (UniqueName: \"kubernetes.io/projected/8fb258f6-7f5f-4390-914c-c995678e50a1-kube-api-access-6wnh4\") pod \"8fb258f6-7f5f-4390-914c-c995678e50a1\" (UID: \"8fb258f6-7f5f-4390-914c-c995678e50a1\") " Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.088920 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb258f6-7f5f-4390-914c-c995678e50a1-catalog-content\") pod \"8fb258f6-7f5f-4390-914c-c995678e50a1\" (UID: \"8fb258f6-7f5f-4390-914c-c995678e50a1\") " Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.088944 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb258f6-7f5f-4390-914c-c995678e50a1-utilities\") pod \"8fb258f6-7f5f-4390-914c-c995678e50a1\" (UID: \"8fb258f6-7f5f-4390-914c-c995678e50a1\") " Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.089930 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/8fb258f6-7f5f-4390-914c-c995678e50a1-utilities" (OuterVolumeSpecName: "utilities") pod "8fb258f6-7f5f-4390-914c-c995678e50a1" (UID: "8fb258f6-7f5f-4390-914c-c995678e50a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.097223 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fb258f6-7f5f-4390-914c-c995678e50a1-kube-api-access-6wnh4" (OuterVolumeSpecName: "kube-api-access-6wnh4") pod "8fb258f6-7f5f-4390-914c-c995678e50a1" (UID: "8fb258f6-7f5f-4390-914c-c995678e50a1"). InnerVolumeSpecName "kube-api-access-6wnh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.149232 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fb258f6-7f5f-4390-914c-c995678e50a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8fb258f6-7f5f-4390-914c-c995678e50a1" (UID: "8fb258f6-7f5f-4390-914c-c995678e50a1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.193438 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wnh4\" (UniqueName: \"kubernetes.io/projected/8fb258f6-7f5f-4390-914c-c995678e50a1-kube-api-access-6wnh4\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.193479 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fb258f6-7f5f-4390-914c-c995678e50a1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.193490 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fb258f6-7f5f-4390-914c-c995678e50a1-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.347885 4942 generic.go:334] "Generic (PLEG): container finished" podID="8fb258f6-7f5f-4390-914c-c995678e50a1" containerID="8305a88c416de0c9b2310929b24a0d21d4e3ea27fd98a7ee5ee4c0732cd69eba" exitCode=0 Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.347945 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-54nj4" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.347952 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54nj4" event={"ID":"8fb258f6-7f5f-4390-914c-c995678e50a1","Type":"ContainerDied","Data":"8305a88c416de0c9b2310929b24a0d21d4e3ea27fd98a7ee5ee4c0732cd69eba"} Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.348111 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-54nj4" event={"ID":"8fb258f6-7f5f-4390-914c-c995678e50a1","Type":"ContainerDied","Data":"b40cd8f81d01c74449c4f0f506f6c37d26862a0d01d0a884cb9a3cbe588a5959"} Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.348160 4942 scope.go:117] "RemoveContainer" containerID="8305a88c416de0c9b2310929b24a0d21d4e3ea27fd98a7ee5ee4c0732cd69eba" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.382744 4942 scope.go:117] "RemoveContainer" containerID="8e298d888c278e4c90132d234a8180768a8d5dcf4ddeb6b7f12c3cde11330db2" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.403152 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-54nj4"] Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.410346 4942 scope.go:117] "RemoveContainer" containerID="461620309be11c6d74d4dfea6973afbc2f273830fc09c7aefbb9416bcfc66705" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.415899 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-54nj4"] Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.474362 4942 scope.go:117] "RemoveContainer" containerID="8305a88c416de0c9b2310929b24a0d21d4e3ea27fd98a7ee5ee4c0732cd69eba" Feb 18 20:10:00 crc kubenswrapper[4942]: E0218 20:10:00.474843 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8305a88c416de0c9b2310929b24a0d21d4e3ea27fd98a7ee5ee4c0732cd69eba\": container with ID starting with 8305a88c416de0c9b2310929b24a0d21d4e3ea27fd98a7ee5ee4c0732cd69eba not found: ID does not exist" containerID="8305a88c416de0c9b2310929b24a0d21d4e3ea27fd98a7ee5ee4c0732cd69eba" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.474892 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8305a88c416de0c9b2310929b24a0d21d4e3ea27fd98a7ee5ee4c0732cd69eba"} err="failed to get container status \"8305a88c416de0c9b2310929b24a0d21d4e3ea27fd98a7ee5ee4c0732cd69eba\": rpc error: code = NotFound desc = could not find container \"8305a88c416de0c9b2310929b24a0d21d4e3ea27fd98a7ee5ee4c0732cd69eba\": container with ID starting with 8305a88c416de0c9b2310929b24a0d21d4e3ea27fd98a7ee5ee4c0732cd69eba not found: ID does not exist" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.474920 4942 scope.go:117] "RemoveContainer" containerID="8e298d888c278e4c90132d234a8180768a8d5dcf4ddeb6b7f12c3cde11330db2" Feb 18 20:10:00 crc kubenswrapper[4942]: E0218 20:10:00.475486 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e298d888c278e4c90132d234a8180768a8d5dcf4ddeb6b7f12c3cde11330db2\": container with ID starting with 8e298d888c278e4c90132d234a8180768a8d5dcf4ddeb6b7f12c3cde11330db2 not found: ID does not exist" containerID="8e298d888c278e4c90132d234a8180768a8d5dcf4ddeb6b7f12c3cde11330db2" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.475508 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e298d888c278e4c90132d234a8180768a8d5dcf4ddeb6b7f12c3cde11330db2"} err="failed to get container status \"8e298d888c278e4c90132d234a8180768a8d5dcf4ddeb6b7f12c3cde11330db2\": rpc error: code = NotFound desc = could not find container \"8e298d888c278e4c90132d234a8180768a8d5dcf4ddeb6b7f12c3cde11330db2\": container with ID 
starting with 8e298d888c278e4c90132d234a8180768a8d5dcf4ddeb6b7f12c3cde11330db2 not found: ID does not exist" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.475521 4942 scope.go:117] "RemoveContainer" containerID="461620309be11c6d74d4dfea6973afbc2f273830fc09c7aefbb9416bcfc66705" Feb 18 20:10:00 crc kubenswrapper[4942]: E0218 20:10:00.475751 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"461620309be11c6d74d4dfea6973afbc2f273830fc09c7aefbb9416bcfc66705\": container with ID starting with 461620309be11c6d74d4dfea6973afbc2f273830fc09c7aefbb9416bcfc66705 not found: ID does not exist" containerID="461620309be11c6d74d4dfea6973afbc2f273830fc09c7aefbb9416bcfc66705" Feb 18 20:10:00 crc kubenswrapper[4942]: I0218 20:10:00.475790 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"461620309be11c6d74d4dfea6973afbc2f273830fc09c7aefbb9416bcfc66705"} err="failed to get container status \"461620309be11c6d74d4dfea6973afbc2f273830fc09c7aefbb9416bcfc66705\": rpc error: code = NotFound desc = could not find container \"461620309be11c6d74d4dfea6973afbc2f273830fc09c7aefbb9416bcfc66705\": container with ID starting with 461620309be11c6d74d4dfea6973afbc2f273830fc09c7aefbb9416bcfc66705 not found: ID does not exist" Feb 18 20:10:01 crc kubenswrapper[4942]: I0218 20:10:01.069966 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fb258f6-7f5f-4390-914c-c995678e50a1" path="/var/lib/kubelet/pods/8fb258f6-7f5f-4390-914c-c995678e50a1/volumes" Feb 18 20:10:13 crc kubenswrapper[4942]: I0218 20:10:13.037071 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:10:13 crc kubenswrapper[4942]: E0218 20:10:13.038343 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:10:28 crc kubenswrapper[4942]: I0218 20:10:28.036363 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:10:28 crc kubenswrapper[4942]: E0218 20:10:28.037155 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:10:39 crc kubenswrapper[4942]: I0218 20:10:39.039155 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:10:39 crc kubenswrapper[4942]: E0218 20:10:39.040049 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:10:53 crc kubenswrapper[4942]: I0218 20:10:53.038518 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:10:53 crc kubenswrapper[4942]: E0218 20:10:53.039470 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:11:06 crc kubenswrapper[4942]: I0218 20:11:06.037490 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:11:06 crc kubenswrapper[4942]: E0218 20:11:06.038195 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:11:20 crc kubenswrapper[4942]: I0218 20:11:20.036516 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:11:20 crc kubenswrapper[4942]: E0218 20:11:20.037547 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:11:32 crc kubenswrapper[4942]: I0218 20:11:32.036953 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:11:32 crc kubenswrapper[4942]: E0218 20:11:32.037831 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:11:43 crc kubenswrapper[4942]: I0218 20:11:43.037755 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:11:43 crc kubenswrapper[4942]: E0218 20:11:43.039352 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:11:55 crc kubenswrapper[4942]: I0218 20:11:55.036846 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:11:55 crc kubenswrapper[4942]: E0218 20:11:55.037554 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:12:06 crc kubenswrapper[4942]: I0218 20:12:06.036412 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:12:06 crc kubenswrapper[4942]: E0218 20:12:06.037160 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:12:18 crc kubenswrapper[4942]: I0218 20:12:18.037071 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:12:18 crc kubenswrapper[4942]: E0218 20:12:18.040276 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:12:33 crc kubenswrapper[4942]: I0218 20:12:33.036329 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:12:33 crc kubenswrapper[4942]: E0218 20:12:33.037104 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:12:48 crc kubenswrapper[4942]: I0218 20:12:48.035985 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:12:48 crc kubenswrapper[4942]: E0218 20:12:48.036891 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:13:00 crc kubenswrapper[4942]: I0218 20:13:00.036462 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:13:00 crc kubenswrapper[4942]: E0218 20:13:00.037637 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:13:13 crc kubenswrapper[4942]: I0218 20:13:13.036150 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:13:13 crc kubenswrapper[4942]: E0218 20:13:13.036748 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:13:27 crc kubenswrapper[4942]: I0218 20:13:27.052734 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:13:27 crc kubenswrapper[4942]: E0218 20:13:27.055441 4942 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:13:39 crc kubenswrapper[4942]: I0218 20:13:39.036500 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:13:39 crc kubenswrapper[4942]: E0218 20:13:39.037421 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.822287 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nfjgd"] Feb 18 20:13:53 crc kubenswrapper[4942]: E0218 20:13:53.823323 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb258f6-7f5f-4390-914c-c995678e50a1" containerName="extract-utilities" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.823338 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb258f6-7f5f-4390-914c-c995678e50a1" containerName="extract-utilities" Feb 18 20:13:53 crc kubenswrapper[4942]: E0218 20:13:53.823372 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb258f6-7f5f-4390-914c-c995678e50a1" containerName="registry-server" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.823379 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb258f6-7f5f-4390-914c-c995678e50a1" 
containerName="registry-server" Feb 18 20:13:53 crc kubenswrapper[4942]: E0218 20:13:53.823401 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb258f6-7f5f-4390-914c-c995678e50a1" containerName="extract-content" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.823409 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb258f6-7f5f-4390-914c-c995678e50a1" containerName="extract-content" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.823654 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fb258f6-7f5f-4390-914c-c995678e50a1" containerName="registry-server" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.825358 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.834563 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nfjgd"] Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.888022 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwbms\" (UniqueName: \"kubernetes.io/projected/5d21c90f-12fc-4f90-a74e-8da0266710d6-kube-api-access-pwbms\") pod \"redhat-operators-nfjgd\" (UID: \"5d21c90f-12fc-4f90-a74e-8da0266710d6\") " pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.888123 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d21c90f-12fc-4f90-a74e-8da0266710d6-utilities\") pod \"redhat-operators-nfjgd\" (UID: \"5d21c90f-12fc-4f90-a74e-8da0266710d6\") " pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.888183 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/5d21c90f-12fc-4f90-a74e-8da0266710d6-catalog-content\") pod \"redhat-operators-nfjgd\" (UID: \"5d21c90f-12fc-4f90-a74e-8da0266710d6\") " pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.989365 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwbms\" (UniqueName: \"kubernetes.io/projected/5d21c90f-12fc-4f90-a74e-8da0266710d6-kube-api-access-pwbms\") pod \"redhat-operators-nfjgd\" (UID: \"5d21c90f-12fc-4f90-a74e-8da0266710d6\") " pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.989838 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d21c90f-12fc-4f90-a74e-8da0266710d6-utilities\") pod \"redhat-operators-nfjgd\" (UID: \"5d21c90f-12fc-4f90-a74e-8da0266710d6\") " pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.990031 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d21c90f-12fc-4f90-a74e-8da0266710d6-catalog-content\") pod \"redhat-operators-nfjgd\" (UID: \"5d21c90f-12fc-4f90-a74e-8da0266710d6\") " pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.990211 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d21c90f-12fc-4f90-a74e-8da0266710d6-utilities\") pod \"redhat-operators-nfjgd\" (UID: \"5d21c90f-12fc-4f90-a74e-8da0266710d6\") " pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:13:53 crc kubenswrapper[4942]: I0218 20:13:53.990530 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5d21c90f-12fc-4f90-a74e-8da0266710d6-catalog-content\") pod \"redhat-operators-nfjgd\" (UID: \"5d21c90f-12fc-4f90-a74e-8da0266710d6\") " pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:13:54 crc kubenswrapper[4942]: I0218 20:13:54.017669 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwbms\" (UniqueName: \"kubernetes.io/projected/5d21c90f-12fc-4f90-a74e-8da0266710d6-kube-api-access-pwbms\") pod \"redhat-operators-nfjgd\" (UID: \"5d21c90f-12fc-4f90-a74e-8da0266710d6\") " pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:13:54 crc kubenswrapper[4942]: I0218 20:13:54.035397 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:13:54 crc kubenswrapper[4942]: E0218 20:13:54.035937 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:13:54 crc kubenswrapper[4942]: I0218 20:13:54.150246 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:13:55 crc kubenswrapper[4942]: I0218 20:13:55.266339 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nfjgd"] Feb 18 20:13:55 crc kubenswrapper[4942]: I0218 20:13:55.938110 4942 generic.go:334] "Generic (PLEG): container finished" podID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerID="4abacf3ceed61b4f2b51d004ec4785bc183d43b69b5e2b5da2d29159be50768f" exitCode=0 Feb 18 20:13:55 crc kubenswrapper[4942]: I0218 20:13:55.938179 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nfjgd" event={"ID":"5d21c90f-12fc-4f90-a74e-8da0266710d6","Type":"ContainerDied","Data":"4abacf3ceed61b4f2b51d004ec4785bc183d43b69b5e2b5da2d29159be50768f"} Feb 18 20:13:55 crc kubenswrapper[4942]: I0218 20:13:55.938211 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nfjgd" event={"ID":"5d21c90f-12fc-4f90-a74e-8da0266710d6","Type":"ContainerStarted","Data":"c54b96648e25109d528d6659096fcf53de95dbbd628245057ff76cb7139280b0"} Feb 18 20:13:55 crc kubenswrapper[4942]: I0218 20:13:55.939786 4942 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:13:57 crc kubenswrapper[4942]: I0218 20:13:57.959698 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nfjgd" event={"ID":"5d21c90f-12fc-4f90-a74e-8da0266710d6","Type":"ContainerStarted","Data":"cc157299f0f23d6f51bd55cc366d82c09c24ae9675fc7441f263cc7e0003b3ee"} Feb 18 20:14:01 crc kubenswrapper[4942]: I0218 20:14:01.996515 4942 generic.go:334] "Generic (PLEG): container finished" podID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerID="cc157299f0f23d6f51bd55cc366d82c09c24ae9675fc7441f263cc7e0003b3ee" exitCode=0 Feb 18 20:14:01 crc kubenswrapper[4942]: I0218 20:14:01.996583 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-nfjgd" event={"ID":"5d21c90f-12fc-4f90-a74e-8da0266710d6","Type":"ContainerDied","Data":"cc157299f0f23d6f51bd55cc366d82c09c24ae9675fc7441f263cc7e0003b3ee"} Feb 18 20:14:03 crc kubenswrapper[4942]: I0218 20:14:03.008075 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nfjgd" event={"ID":"5d21c90f-12fc-4f90-a74e-8da0266710d6","Type":"ContainerStarted","Data":"7e2a7b2370f1f30bd67f503f19062372eeefbc1b47c75198207dd554e8bf2526"} Feb 18 20:14:03 crc kubenswrapper[4942]: I0218 20:14:03.029796 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nfjgd" podStartSLOduration=3.530061087 podStartE2EDuration="10.02977845s" podCreationTimestamp="2026-02-18 20:13:53 +0000 UTC" firstStartedPulling="2026-02-18 20:13:55.939586574 +0000 UTC m=+3395.644519239" lastFinishedPulling="2026-02-18 20:14:02.439303927 +0000 UTC m=+3402.144236602" observedRunningTime="2026-02-18 20:14:03.025702932 +0000 UTC m=+3402.730635607" watchObservedRunningTime="2026-02-18 20:14:03.02977845 +0000 UTC m=+3402.734711105" Feb 18 20:14:04 crc kubenswrapper[4942]: I0218 20:14:04.151333 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:14:04 crc kubenswrapper[4942]: I0218 20:14:04.152635 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:14:05 crc kubenswrapper[4942]: I0218 20:14:05.218086 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nfjgd" podUID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerName="registry-server" probeResult="failure" output=< Feb 18 20:14:05 crc kubenswrapper[4942]: timeout: failed to connect service ":50051" within 1s Feb 18 20:14:05 crc kubenswrapper[4942]: > Feb 18 20:14:08 crc kubenswrapper[4942]: I0218 
20:14:08.036255 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:14:08 crc kubenswrapper[4942]: E0218 20:14:08.036805 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:14:13 crc kubenswrapper[4942]: I0218 20:14:13.318107 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4sdkv"] Feb 18 20:14:13 crc kubenswrapper[4942]: I0218 20:14:13.320913 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:13 crc kubenswrapper[4942]: I0218 20:14:13.340354 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sdkv"] Feb 18 20:14:13 crc kubenswrapper[4942]: I0218 20:14:13.481565 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf7f8c4-fc64-4175-91fc-88159872b42c-utilities\") pod \"redhat-marketplace-4sdkv\" (UID: \"faf7f8c4-fc64-4175-91fc-88159872b42c\") " pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:13 crc kubenswrapper[4942]: I0218 20:14:13.481822 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf7f8c4-fc64-4175-91fc-88159872b42c-catalog-content\") pod \"redhat-marketplace-4sdkv\" (UID: \"faf7f8c4-fc64-4175-91fc-88159872b42c\") " pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:13 crc 
kubenswrapper[4942]: I0218 20:14:13.481926 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czrkg\" (UniqueName: \"kubernetes.io/projected/faf7f8c4-fc64-4175-91fc-88159872b42c-kube-api-access-czrkg\") pod \"redhat-marketplace-4sdkv\" (UID: \"faf7f8c4-fc64-4175-91fc-88159872b42c\") " pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:13 crc kubenswrapper[4942]: I0218 20:14:13.583911 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf7f8c4-fc64-4175-91fc-88159872b42c-utilities\") pod \"redhat-marketplace-4sdkv\" (UID: \"faf7f8c4-fc64-4175-91fc-88159872b42c\") " pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:13 crc kubenswrapper[4942]: I0218 20:14:13.584237 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf7f8c4-fc64-4175-91fc-88159872b42c-catalog-content\") pod \"redhat-marketplace-4sdkv\" (UID: \"faf7f8c4-fc64-4175-91fc-88159872b42c\") " pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:13 crc kubenswrapper[4942]: I0218 20:14:13.584284 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czrkg\" (UniqueName: \"kubernetes.io/projected/faf7f8c4-fc64-4175-91fc-88159872b42c-kube-api-access-czrkg\") pod \"redhat-marketplace-4sdkv\" (UID: \"faf7f8c4-fc64-4175-91fc-88159872b42c\") " pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:13 crc kubenswrapper[4942]: I0218 20:14:13.584546 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf7f8c4-fc64-4175-91fc-88159872b42c-utilities\") pod \"redhat-marketplace-4sdkv\" (UID: \"faf7f8c4-fc64-4175-91fc-88159872b42c\") " pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:13 crc 
kubenswrapper[4942]: I0218 20:14:13.584644 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf7f8c4-fc64-4175-91fc-88159872b42c-catalog-content\") pod \"redhat-marketplace-4sdkv\" (UID: \"faf7f8c4-fc64-4175-91fc-88159872b42c\") " pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:13 crc kubenswrapper[4942]: I0218 20:14:13.604036 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czrkg\" (UniqueName: \"kubernetes.io/projected/faf7f8c4-fc64-4175-91fc-88159872b42c-kube-api-access-czrkg\") pod \"redhat-marketplace-4sdkv\" (UID: \"faf7f8c4-fc64-4175-91fc-88159872b42c\") " pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:13 crc kubenswrapper[4942]: I0218 20:14:13.646025 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:14 crc kubenswrapper[4942]: W0218 20:14:14.151126 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfaf7f8c4_fc64_4175_91fc_88159872b42c.slice/crio-64999d3a1747b900f09b3bd8bcd593a06bdda3e21320990765cd8f52df1e7ef4 WatchSource:0}: Error finding container 64999d3a1747b900f09b3bd8bcd593a06bdda3e21320990765cd8f52df1e7ef4: Status 404 returned error can't find the container with id 64999d3a1747b900f09b3bd8bcd593a06bdda3e21320990765cd8f52df1e7ef4 Feb 18 20:14:14 crc kubenswrapper[4942]: I0218 20:14:14.192996 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sdkv"] Feb 18 20:14:15 crc kubenswrapper[4942]: I0218 20:14:15.130597 4942 generic.go:334] "Generic (PLEG): container finished" podID="faf7f8c4-fc64-4175-91fc-88159872b42c" containerID="bd970cae2ac80796b290caa547861b3b5927678f99ce99f4840749db4d17df9a" exitCode=0 Feb 18 20:14:15 crc kubenswrapper[4942]: I0218 20:14:15.130707 4942 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sdkv" event={"ID":"faf7f8c4-fc64-4175-91fc-88159872b42c","Type":"ContainerDied","Data":"bd970cae2ac80796b290caa547861b3b5927678f99ce99f4840749db4d17df9a"} Feb 18 20:14:15 crc kubenswrapper[4942]: I0218 20:14:15.131713 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sdkv" event={"ID":"faf7f8c4-fc64-4175-91fc-88159872b42c","Type":"ContainerStarted","Data":"64999d3a1747b900f09b3bd8bcd593a06bdda3e21320990765cd8f52df1e7ef4"} Feb 18 20:14:15 crc kubenswrapper[4942]: I0218 20:14:15.212720 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nfjgd" podUID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerName="registry-server" probeResult="failure" output=< Feb 18 20:14:15 crc kubenswrapper[4942]: timeout: failed to connect service ":50051" within 1s Feb 18 20:14:15 crc kubenswrapper[4942]: > Feb 18 20:14:16 crc kubenswrapper[4942]: I0218 20:14:16.141286 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sdkv" event={"ID":"faf7f8c4-fc64-4175-91fc-88159872b42c","Type":"ContainerStarted","Data":"c068d262aee1dbb5c83bbc9c9d4b94f3f620ed1186516e827ce6cffc66e87ae0"} Feb 18 20:14:18 crc kubenswrapper[4942]: I0218 20:14:18.166295 4942 generic.go:334] "Generic (PLEG): container finished" podID="faf7f8c4-fc64-4175-91fc-88159872b42c" containerID="c068d262aee1dbb5c83bbc9c9d4b94f3f620ed1186516e827ce6cffc66e87ae0" exitCode=0 Feb 18 20:14:18 crc kubenswrapper[4942]: I0218 20:14:18.166378 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sdkv" event={"ID":"faf7f8c4-fc64-4175-91fc-88159872b42c","Type":"ContainerDied","Data":"c068d262aee1dbb5c83bbc9c9d4b94f3f620ed1186516e827ce6cffc66e87ae0"} Feb 18 20:14:19 crc kubenswrapper[4942]: I0218 20:14:19.036032 4942 scope.go:117] "RemoveContainer" 
containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:14:19 crc kubenswrapper[4942]: E0218 20:14:19.036554 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:14:19 crc kubenswrapper[4942]: I0218 20:14:19.178629 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sdkv" event={"ID":"faf7f8c4-fc64-4175-91fc-88159872b42c","Type":"ContainerStarted","Data":"1ce487bc1b4a9d2dba3fbea588fa97259c974924d32756b597d604b463533253"} Feb 18 20:14:19 crc kubenswrapper[4942]: I0218 20:14:19.205588 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4sdkv" podStartSLOduration=2.770162916 podStartE2EDuration="6.205568668s" podCreationTimestamp="2026-02-18 20:14:13 +0000 UTC" firstStartedPulling="2026-02-18 20:14:15.132873256 +0000 UTC m=+3414.837805921" lastFinishedPulling="2026-02-18 20:14:18.568279008 +0000 UTC m=+3418.273211673" observedRunningTime="2026-02-18 20:14:19.203339019 +0000 UTC m=+3418.908271694" watchObservedRunningTime="2026-02-18 20:14:19.205568668 +0000 UTC m=+3418.910501343" Feb 18 20:14:23 crc kubenswrapper[4942]: I0218 20:14:23.647206 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:23 crc kubenswrapper[4942]: I0218 20:14:23.649001 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:24 crc kubenswrapper[4942]: I0218 20:14:24.706404 4942 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-4sdkv" podUID="faf7f8c4-fc64-4175-91fc-88159872b42c" containerName="registry-server" probeResult="failure" output=< Feb 18 20:14:24 crc kubenswrapper[4942]: timeout: failed to connect service ":50051" within 1s Feb 18 20:14:24 crc kubenswrapper[4942]: > Feb 18 20:14:25 crc kubenswrapper[4942]: I0218 20:14:25.212092 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nfjgd" podUID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerName="registry-server" probeResult="failure" output=< Feb 18 20:14:25 crc kubenswrapper[4942]: timeout: failed to connect service ":50051" within 1s Feb 18 20:14:25 crc kubenswrapper[4942]: > Feb 18 20:14:32 crc kubenswrapper[4942]: I0218 20:14:32.036706 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495" Feb 18 20:14:32 crc kubenswrapper[4942]: I0218 20:14:32.319522 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"4637aee37878bbe70bd62244a6764ec2f38f73d2c09b6cb8754f4ec3ccb78f19"} Feb 18 20:14:33 crc kubenswrapper[4942]: I0218 20:14:33.706286 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:33 crc kubenswrapper[4942]: I0218 20:14:33.767595 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:33 crc kubenswrapper[4942]: I0218 20:14:33.944743 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sdkv"] Feb 18 20:14:34 crc kubenswrapper[4942]: I0218 20:14:34.206552 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:14:34 crc kubenswrapper[4942]: I0218 20:14:34.269874 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:14:35 crc kubenswrapper[4942]: I0218 20:14:35.350939 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4sdkv" podUID="faf7f8c4-fc64-4175-91fc-88159872b42c" containerName="registry-server" containerID="cri-o://1ce487bc1b4a9d2dba3fbea588fa97259c974924d32756b597d604b463533253" gracePeriod=2 Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.344564 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nfjgd"] Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.345104 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nfjgd" podUID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerName="registry-server" containerID="cri-o://7e2a7b2370f1f30bd67f503f19062372eeefbc1b47c75198207dd554e8bf2526" gracePeriod=2 Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.345547 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.360478 4942 generic.go:334] "Generic (PLEG): container finished" podID="faf7f8c4-fc64-4175-91fc-88159872b42c" containerID="1ce487bc1b4a9d2dba3fbea588fa97259c974924d32756b597d604b463533253" exitCode=0 Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.360521 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sdkv" event={"ID":"faf7f8c4-fc64-4175-91fc-88159872b42c","Type":"ContainerDied","Data":"1ce487bc1b4a9d2dba3fbea588fa97259c974924d32756b597d604b463533253"} Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.360547 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sdkv" event={"ID":"faf7f8c4-fc64-4175-91fc-88159872b42c","Type":"ContainerDied","Data":"64999d3a1747b900f09b3bd8bcd593a06bdda3e21320990765cd8f52df1e7ef4"} Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.360556 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sdkv" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.360576 4942 scope.go:117] "RemoveContainer" containerID="1ce487bc1b4a9d2dba3fbea588fa97259c974924d32756b597d604b463533253" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.386602 4942 scope.go:117] "RemoveContainer" containerID="c068d262aee1dbb5c83bbc9c9d4b94f3f620ed1186516e827ce6cffc66e87ae0" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.389436 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf7f8c4-fc64-4175-91fc-88159872b42c-utilities\") pod \"faf7f8c4-fc64-4175-91fc-88159872b42c\" (UID: \"faf7f8c4-fc64-4175-91fc-88159872b42c\") " Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.389510 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czrkg\" (UniqueName: \"kubernetes.io/projected/faf7f8c4-fc64-4175-91fc-88159872b42c-kube-api-access-czrkg\") pod \"faf7f8c4-fc64-4175-91fc-88159872b42c\" (UID: \"faf7f8c4-fc64-4175-91fc-88159872b42c\") " Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.389568 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf7f8c4-fc64-4175-91fc-88159872b42c-catalog-content\") pod \"faf7f8c4-fc64-4175-91fc-88159872b42c\" (UID: \"faf7f8c4-fc64-4175-91fc-88159872b42c\") " Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.390683 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faf7f8c4-fc64-4175-91fc-88159872b42c-utilities" (OuterVolumeSpecName: "utilities") pod "faf7f8c4-fc64-4175-91fc-88159872b42c" (UID: "faf7f8c4-fc64-4175-91fc-88159872b42c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.415325 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faf7f8c4-fc64-4175-91fc-88159872b42c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "faf7f8c4-fc64-4175-91fc-88159872b42c" (UID: "faf7f8c4-fc64-4175-91fc-88159872b42c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.426955 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faf7f8c4-fc64-4175-91fc-88159872b42c-kube-api-access-czrkg" (OuterVolumeSpecName: "kube-api-access-czrkg") pod "faf7f8c4-fc64-4175-91fc-88159872b42c" (UID: "faf7f8c4-fc64-4175-91fc-88159872b42c"). InnerVolumeSpecName "kube-api-access-czrkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.438401 4942 scope.go:117] "RemoveContainer" containerID="bd970cae2ac80796b290caa547861b3b5927678f99ce99f4840749db4d17df9a" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.492042 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf7f8c4-fc64-4175-91fc-88159872b42c-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.492087 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czrkg\" (UniqueName: \"kubernetes.io/projected/faf7f8c4-fc64-4175-91fc-88159872b42c-kube-api-access-czrkg\") on node \"crc\" DevicePath \"\"" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.492099 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf7f8c4-fc64-4175-91fc-88159872b42c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 
20:14:36.620401 4942 scope.go:117] "RemoveContainer" containerID="1ce487bc1b4a9d2dba3fbea588fa97259c974924d32756b597d604b463533253" Feb 18 20:14:36 crc kubenswrapper[4942]: E0218 20:14:36.622837 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ce487bc1b4a9d2dba3fbea588fa97259c974924d32756b597d604b463533253\": container with ID starting with 1ce487bc1b4a9d2dba3fbea588fa97259c974924d32756b597d604b463533253 not found: ID does not exist" containerID="1ce487bc1b4a9d2dba3fbea588fa97259c974924d32756b597d604b463533253" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.622877 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ce487bc1b4a9d2dba3fbea588fa97259c974924d32756b597d604b463533253"} err="failed to get container status \"1ce487bc1b4a9d2dba3fbea588fa97259c974924d32756b597d604b463533253\": rpc error: code = NotFound desc = could not find container \"1ce487bc1b4a9d2dba3fbea588fa97259c974924d32756b597d604b463533253\": container with ID starting with 1ce487bc1b4a9d2dba3fbea588fa97259c974924d32756b597d604b463533253 not found: ID does not exist" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.622903 4942 scope.go:117] "RemoveContainer" containerID="c068d262aee1dbb5c83bbc9c9d4b94f3f620ed1186516e827ce6cffc66e87ae0" Feb 18 20:14:36 crc kubenswrapper[4942]: E0218 20:14:36.627000 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c068d262aee1dbb5c83bbc9c9d4b94f3f620ed1186516e827ce6cffc66e87ae0\": container with ID starting with c068d262aee1dbb5c83bbc9c9d4b94f3f620ed1186516e827ce6cffc66e87ae0 not found: ID does not exist" containerID="c068d262aee1dbb5c83bbc9c9d4b94f3f620ed1186516e827ce6cffc66e87ae0" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.627046 4942 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c068d262aee1dbb5c83bbc9c9d4b94f3f620ed1186516e827ce6cffc66e87ae0"} err="failed to get container status \"c068d262aee1dbb5c83bbc9c9d4b94f3f620ed1186516e827ce6cffc66e87ae0\": rpc error: code = NotFound desc = could not find container \"c068d262aee1dbb5c83bbc9c9d4b94f3f620ed1186516e827ce6cffc66e87ae0\": container with ID starting with c068d262aee1dbb5c83bbc9c9d4b94f3f620ed1186516e827ce6cffc66e87ae0 not found: ID does not exist" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.627075 4942 scope.go:117] "RemoveContainer" containerID="bd970cae2ac80796b290caa547861b3b5927678f99ce99f4840749db4d17df9a" Feb 18 20:14:36 crc kubenswrapper[4942]: E0218 20:14:36.627359 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd970cae2ac80796b290caa547861b3b5927678f99ce99f4840749db4d17df9a\": container with ID starting with bd970cae2ac80796b290caa547861b3b5927678f99ce99f4840749db4d17df9a not found: ID does not exist" containerID="bd970cae2ac80796b290caa547861b3b5927678f99ce99f4840749db4d17df9a" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.627380 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd970cae2ac80796b290caa547861b3b5927678f99ce99f4840749db4d17df9a"} err="failed to get container status \"bd970cae2ac80796b290caa547861b3b5927678f99ce99f4840749db4d17df9a\": rpc error: code = NotFound desc = could not find container \"bd970cae2ac80796b290caa547861b3b5927678f99ce99f4840749db4d17df9a\": container with ID starting with bd970cae2ac80796b290caa547861b3b5927678f99ce99f4840749db4d17df9a not found: ID does not exist" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.698591 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sdkv"] Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.709028 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-4sdkv"] Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.808340 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.898941 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwbms\" (UniqueName: \"kubernetes.io/projected/5d21c90f-12fc-4f90-a74e-8da0266710d6-kube-api-access-pwbms\") pod \"5d21c90f-12fc-4f90-a74e-8da0266710d6\" (UID: \"5d21c90f-12fc-4f90-a74e-8da0266710d6\") " Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.899025 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d21c90f-12fc-4f90-a74e-8da0266710d6-catalog-content\") pod \"5d21c90f-12fc-4f90-a74e-8da0266710d6\" (UID: \"5d21c90f-12fc-4f90-a74e-8da0266710d6\") " Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.899326 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d21c90f-12fc-4f90-a74e-8da0266710d6-utilities\") pod \"5d21c90f-12fc-4f90-a74e-8da0266710d6\" (UID: \"5d21c90f-12fc-4f90-a74e-8da0266710d6\") " Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.900030 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d21c90f-12fc-4f90-a74e-8da0266710d6-utilities" (OuterVolumeSpecName: "utilities") pod "5d21c90f-12fc-4f90-a74e-8da0266710d6" (UID: "5d21c90f-12fc-4f90-a74e-8da0266710d6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:14:36 crc kubenswrapper[4942]: I0218 20:14:36.904159 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d21c90f-12fc-4f90-a74e-8da0266710d6-kube-api-access-pwbms" (OuterVolumeSpecName: "kube-api-access-pwbms") pod "5d21c90f-12fc-4f90-a74e-8da0266710d6" (UID: "5d21c90f-12fc-4f90-a74e-8da0266710d6"). InnerVolumeSpecName "kube-api-access-pwbms". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.002065 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwbms\" (UniqueName: \"kubernetes.io/projected/5d21c90f-12fc-4f90-a74e-8da0266710d6-kube-api-access-pwbms\") on node \"crc\" DevicePath \"\"" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.002460 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d21c90f-12fc-4f90-a74e-8da0266710d6-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.019326 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d21c90f-12fc-4f90-a74e-8da0266710d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d21c90f-12fc-4f90-a74e-8da0266710d6" (UID: "5d21c90f-12fc-4f90-a74e-8da0266710d6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.049309 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faf7f8c4-fc64-4175-91fc-88159872b42c" path="/var/lib/kubelet/pods/faf7f8c4-fc64-4175-91fc-88159872b42c/volumes" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.105344 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d21c90f-12fc-4f90-a74e-8da0266710d6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.372437 4942 generic.go:334] "Generic (PLEG): container finished" podID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerID="7e2a7b2370f1f30bd67f503f19062372eeefbc1b47c75198207dd554e8bf2526" exitCode=0 Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.372651 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nfjgd" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.372669 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nfjgd" event={"ID":"5d21c90f-12fc-4f90-a74e-8da0266710d6","Type":"ContainerDied","Data":"7e2a7b2370f1f30bd67f503f19062372eeefbc1b47c75198207dd554e8bf2526"} Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.372731 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nfjgd" event={"ID":"5d21c90f-12fc-4f90-a74e-8da0266710d6","Type":"ContainerDied","Data":"c54b96648e25109d528d6659096fcf53de95dbbd628245057ff76cb7139280b0"} Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.372790 4942 scope.go:117] "RemoveContainer" containerID="7e2a7b2370f1f30bd67f503f19062372eeefbc1b47c75198207dd554e8bf2526" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.397245 4942 scope.go:117] "RemoveContainer" 
containerID="cc157299f0f23d6f51bd55cc366d82c09c24ae9675fc7441f263cc7e0003b3ee" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.407378 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nfjgd"] Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.418343 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nfjgd"] Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.442594 4942 scope.go:117] "RemoveContainer" containerID="4abacf3ceed61b4f2b51d004ec4785bc183d43b69b5e2b5da2d29159be50768f" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.464298 4942 scope.go:117] "RemoveContainer" containerID="7e2a7b2370f1f30bd67f503f19062372eeefbc1b47c75198207dd554e8bf2526" Feb 18 20:14:37 crc kubenswrapper[4942]: E0218 20:14:37.464897 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e2a7b2370f1f30bd67f503f19062372eeefbc1b47c75198207dd554e8bf2526\": container with ID starting with 7e2a7b2370f1f30bd67f503f19062372eeefbc1b47c75198207dd554e8bf2526 not found: ID does not exist" containerID="7e2a7b2370f1f30bd67f503f19062372eeefbc1b47c75198207dd554e8bf2526" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.464949 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e2a7b2370f1f30bd67f503f19062372eeefbc1b47c75198207dd554e8bf2526"} err="failed to get container status \"7e2a7b2370f1f30bd67f503f19062372eeefbc1b47c75198207dd554e8bf2526\": rpc error: code = NotFound desc = could not find container \"7e2a7b2370f1f30bd67f503f19062372eeefbc1b47c75198207dd554e8bf2526\": container with ID starting with 7e2a7b2370f1f30bd67f503f19062372eeefbc1b47c75198207dd554e8bf2526 not found: ID does not exist" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.465002 4942 scope.go:117] "RemoveContainer" 
containerID="cc157299f0f23d6f51bd55cc366d82c09c24ae9675fc7441f263cc7e0003b3ee" Feb 18 20:14:37 crc kubenswrapper[4942]: E0218 20:14:37.465478 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc157299f0f23d6f51bd55cc366d82c09c24ae9675fc7441f263cc7e0003b3ee\": container with ID starting with cc157299f0f23d6f51bd55cc366d82c09c24ae9675fc7441f263cc7e0003b3ee not found: ID does not exist" containerID="cc157299f0f23d6f51bd55cc366d82c09c24ae9675fc7441f263cc7e0003b3ee" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.465511 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc157299f0f23d6f51bd55cc366d82c09c24ae9675fc7441f263cc7e0003b3ee"} err="failed to get container status \"cc157299f0f23d6f51bd55cc366d82c09c24ae9675fc7441f263cc7e0003b3ee\": rpc error: code = NotFound desc = could not find container \"cc157299f0f23d6f51bd55cc366d82c09c24ae9675fc7441f263cc7e0003b3ee\": container with ID starting with cc157299f0f23d6f51bd55cc366d82c09c24ae9675fc7441f263cc7e0003b3ee not found: ID does not exist" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.465530 4942 scope.go:117] "RemoveContainer" containerID="4abacf3ceed61b4f2b51d004ec4785bc183d43b69b5e2b5da2d29159be50768f" Feb 18 20:14:37 crc kubenswrapper[4942]: E0218 20:14:37.467288 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4abacf3ceed61b4f2b51d004ec4785bc183d43b69b5e2b5da2d29159be50768f\": container with ID starting with 4abacf3ceed61b4f2b51d004ec4785bc183d43b69b5e2b5da2d29159be50768f not found: ID does not exist" containerID="4abacf3ceed61b4f2b51d004ec4785bc183d43b69b5e2b5da2d29159be50768f" Feb 18 20:14:37 crc kubenswrapper[4942]: I0218 20:14:37.467324 4942 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4abacf3ceed61b4f2b51d004ec4785bc183d43b69b5e2b5da2d29159be50768f"} err="failed to get container status \"4abacf3ceed61b4f2b51d004ec4785bc183d43b69b5e2b5da2d29159be50768f\": rpc error: code = NotFound desc = could not find container \"4abacf3ceed61b4f2b51d004ec4785bc183d43b69b5e2b5da2d29159be50768f\": container with ID starting with 4abacf3ceed61b4f2b51d004ec4785bc183d43b69b5e2b5da2d29159be50768f not found: ID does not exist" Feb 18 20:14:39 crc kubenswrapper[4942]: I0218 20:14:39.048425 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d21c90f-12fc-4f90-a74e-8da0266710d6" path="/var/lib/kubelet/pods/5d21c90f-12fc-4f90-a74e-8da0266710d6/volumes" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.319458 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r6dvm"] Feb 18 20:14:58 crc kubenswrapper[4942]: E0218 20:14:58.320522 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerName="extract-utilities" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.320538 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerName="extract-utilities" Feb 18 20:14:58 crc kubenswrapper[4942]: E0218 20:14:58.320564 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf7f8c4-fc64-4175-91fc-88159872b42c" containerName="extract-content" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.320572 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf7f8c4-fc64-4175-91fc-88159872b42c" containerName="extract-content" Feb 18 20:14:58 crc kubenswrapper[4942]: E0218 20:14:58.320599 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf7f8c4-fc64-4175-91fc-88159872b42c" containerName="registry-server" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.320608 4942 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="faf7f8c4-fc64-4175-91fc-88159872b42c" containerName="registry-server" Feb 18 20:14:58 crc kubenswrapper[4942]: E0218 20:14:58.320629 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerName="registry-server" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.320637 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerName="registry-server" Feb 18 20:14:58 crc kubenswrapper[4942]: E0218 20:14:58.320658 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf7f8c4-fc64-4175-91fc-88159872b42c" containerName="extract-utilities" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.320666 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf7f8c4-fc64-4175-91fc-88159872b42c" containerName="extract-utilities" Feb 18 20:14:58 crc kubenswrapper[4942]: E0218 20:14:58.320685 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerName="extract-content" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.320693 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerName="extract-content" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.320968 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="faf7f8c4-fc64-4175-91fc-88159872b42c" containerName="registry-server" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.320991 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d21c90f-12fc-4f90-a74e-8da0266710d6" containerName="registry-server" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.322922 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.330408 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r6dvm"] Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.467605 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gfk4\" (UniqueName: \"kubernetes.io/projected/53c5bf36-8646-4dfb-a736-038ae98719e0-kube-api-access-6gfk4\") pod \"certified-operators-r6dvm\" (UID: \"53c5bf36-8646-4dfb-a736-038ae98719e0\") " pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.467888 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c5bf36-8646-4dfb-a736-038ae98719e0-catalog-content\") pod \"certified-operators-r6dvm\" (UID: \"53c5bf36-8646-4dfb-a736-038ae98719e0\") " pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.468080 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c5bf36-8646-4dfb-a736-038ae98719e0-utilities\") pod \"certified-operators-r6dvm\" (UID: \"53c5bf36-8646-4dfb-a736-038ae98719e0\") " pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.568984 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c5bf36-8646-4dfb-a736-038ae98719e0-catalog-content\") pod \"certified-operators-r6dvm\" (UID: \"53c5bf36-8646-4dfb-a736-038ae98719e0\") " pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.569085 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c5bf36-8646-4dfb-a736-038ae98719e0-utilities\") pod \"certified-operators-r6dvm\" (UID: \"53c5bf36-8646-4dfb-a736-038ae98719e0\") " pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.569143 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gfk4\" (UniqueName: \"kubernetes.io/projected/53c5bf36-8646-4dfb-a736-038ae98719e0-kube-api-access-6gfk4\") pod \"certified-operators-r6dvm\" (UID: \"53c5bf36-8646-4dfb-a736-038ae98719e0\") " pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.569506 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c5bf36-8646-4dfb-a736-038ae98719e0-catalog-content\") pod \"certified-operators-r6dvm\" (UID: \"53c5bf36-8646-4dfb-a736-038ae98719e0\") " pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.569814 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c5bf36-8646-4dfb-a736-038ae98719e0-utilities\") pod \"certified-operators-r6dvm\" (UID: \"53c5bf36-8646-4dfb-a736-038ae98719e0\") " pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.588805 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gfk4\" (UniqueName: \"kubernetes.io/projected/53c5bf36-8646-4dfb-a736-038ae98719e0-kube-api-access-6gfk4\") pod \"certified-operators-r6dvm\" (UID: \"53c5bf36-8646-4dfb-a736-038ae98719e0\") " pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:14:58 crc kubenswrapper[4942]: I0218 20:14:58.642554 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r6dvm" Feb 18 20:14:59 crc kubenswrapper[4942]: I0218 20:14:59.135570 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r6dvm"] Feb 18 20:14:59 crc kubenswrapper[4942]: I0218 20:14:59.584185 4942 generic.go:334] "Generic (PLEG): container finished" podID="53c5bf36-8646-4dfb-a736-038ae98719e0" containerID="931405098cc3e52001cb681fc8feb5b0a195e18daa7d338a3f35c7fdff3e8b5d" exitCode=0 Feb 18 20:14:59 crc kubenswrapper[4942]: I0218 20:14:59.584259 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6dvm" event={"ID":"53c5bf36-8646-4dfb-a736-038ae98719e0","Type":"ContainerDied","Data":"931405098cc3e52001cb681fc8feb5b0a195e18daa7d338a3f35c7fdff3e8b5d"} Feb 18 20:14:59 crc kubenswrapper[4942]: I0218 20:14:59.584473 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6dvm" event={"ID":"53c5bf36-8646-4dfb-a736-038ae98719e0","Type":"ContainerStarted","Data":"4135badf274346a5d8726ac2a925164017752639dc54d9d9ac76c44933d06402"} Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.146127 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd"] Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.147879 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.150032 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.154998 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.157546 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd"] Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.297530 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwzh9\" (UniqueName: \"kubernetes.io/projected/81acc89a-7a32-4040-93b5-5332398d6374-kube-api-access-jwzh9\") pod \"collect-profiles-29524095-ztvmd\" (UID: \"81acc89a-7a32-4040-93b5-5332398d6374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.297646 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81acc89a-7a32-4040-93b5-5332398d6374-secret-volume\") pod \"collect-profiles-29524095-ztvmd\" (UID: \"81acc89a-7a32-4040-93b5-5332398d6374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.297726 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81acc89a-7a32-4040-93b5-5332398d6374-config-volume\") pod \"collect-profiles-29524095-ztvmd\" (UID: \"81acc89a-7a32-4040-93b5-5332398d6374\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.400542 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81acc89a-7a32-4040-93b5-5332398d6374-config-volume\") pod \"collect-profiles-29524095-ztvmd\" (UID: \"81acc89a-7a32-4040-93b5-5332398d6374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.400700 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwzh9\" (UniqueName: \"kubernetes.io/projected/81acc89a-7a32-4040-93b5-5332398d6374-kube-api-access-jwzh9\") pod \"collect-profiles-29524095-ztvmd\" (UID: \"81acc89a-7a32-4040-93b5-5332398d6374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.400828 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81acc89a-7a32-4040-93b5-5332398d6374-secret-volume\") pod \"collect-profiles-29524095-ztvmd\" (UID: \"81acc89a-7a32-4040-93b5-5332398d6374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.401588 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81acc89a-7a32-4040-93b5-5332398d6374-config-volume\") pod \"collect-profiles-29524095-ztvmd\" (UID: \"81acc89a-7a32-4040-93b5-5332398d6374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.407261 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/81acc89a-7a32-4040-93b5-5332398d6374-secret-volume\") pod \"collect-profiles-29524095-ztvmd\" (UID: \"81acc89a-7a32-4040-93b5-5332398d6374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.418454 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwzh9\" (UniqueName: \"kubernetes.io/projected/81acc89a-7a32-4040-93b5-5332398d6374-kube-api-access-jwzh9\") pod \"collect-profiles-29524095-ztvmd\" (UID: \"81acc89a-7a32-4040-93b5-5332398d6374\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" Feb 18 20:15:00 crc kubenswrapper[4942]: I0218 20:15:00.468154 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" Feb 18 20:15:01 crc kubenswrapper[4942]: W0218 20:15:01.171314 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81acc89a_7a32_4040_93b5_5332398d6374.slice/crio-230cf1b7689637e6ab2309b369f9c74f4311b7a88c2f87c93eea3d9f235f4ec2 WatchSource:0}: Error finding container 230cf1b7689637e6ab2309b369f9c74f4311b7a88c2f87c93eea3d9f235f4ec2: Status 404 returned error can't find the container with id 230cf1b7689637e6ab2309b369f9c74f4311b7a88c2f87c93eea3d9f235f4ec2 Feb 18 20:15:01 crc kubenswrapper[4942]: I0218 20:15:01.173593 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd"] Feb 18 20:15:01 crc kubenswrapper[4942]: I0218 20:15:01.605637 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6dvm" event={"ID":"53c5bf36-8646-4dfb-a736-038ae98719e0","Type":"ContainerStarted","Data":"7ae16e640a62c3c435f2c4eb293b0bc9abe41bbb203716364c9a4f3546a602e5"} Feb 18 20:15:01 crc kubenswrapper[4942]: I0218 
20:15:01.607222 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" event={"ID":"81acc89a-7a32-4040-93b5-5332398d6374","Type":"ContainerStarted","Data":"75fa89dd848d4145951f50b9174b52dadf015d0268cd8ea1b9dbd6a82f591ee1"} Feb 18 20:15:01 crc kubenswrapper[4942]: I0218 20:15:01.607254 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" event={"ID":"81acc89a-7a32-4040-93b5-5332398d6374","Type":"ContainerStarted","Data":"230cf1b7689637e6ab2309b369f9c74f4311b7a88c2f87c93eea3d9f235f4ec2"} Feb 18 20:15:02 crc kubenswrapper[4942]: I0218 20:15:02.618399 4942 generic.go:334] "Generic (PLEG): container finished" podID="53c5bf36-8646-4dfb-a736-038ae98719e0" containerID="7ae16e640a62c3c435f2c4eb293b0bc9abe41bbb203716364c9a4f3546a602e5" exitCode=0 Feb 18 20:15:02 crc kubenswrapper[4942]: I0218 20:15:02.618500 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6dvm" event={"ID":"53c5bf36-8646-4dfb-a736-038ae98719e0","Type":"ContainerDied","Data":"7ae16e640a62c3c435f2c4eb293b0bc9abe41bbb203716364c9a4f3546a602e5"} Feb 18 20:15:02 crc kubenswrapper[4942]: I0218 20:15:02.622628 4942 generic.go:334] "Generic (PLEG): container finished" podID="81acc89a-7a32-4040-93b5-5332398d6374" containerID="75fa89dd848d4145951f50b9174b52dadf015d0268cd8ea1b9dbd6a82f591ee1" exitCode=0 Feb 18 20:15:02 crc kubenswrapper[4942]: I0218 20:15:02.622690 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" event={"ID":"81acc89a-7a32-4040-93b5-5332398d6374","Type":"ContainerDied","Data":"75fa89dd848d4145951f50b9174b52dadf015d0268cd8ea1b9dbd6a82f591ee1"} Feb 18 20:15:02 crc kubenswrapper[4942]: I0218 20:15:02.970425 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.151361 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwzh9\" (UniqueName: \"kubernetes.io/projected/81acc89a-7a32-4040-93b5-5332398d6374-kube-api-access-jwzh9\") pod \"81acc89a-7a32-4040-93b5-5332398d6374\" (UID: \"81acc89a-7a32-4040-93b5-5332398d6374\") " Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.151468 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81acc89a-7a32-4040-93b5-5332398d6374-secret-volume\") pod \"81acc89a-7a32-4040-93b5-5332398d6374\" (UID: \"81acc89a-7a32-4040-93b5-5332398d6374\") " Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.151597 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81acc89a-7a32-4040-93b5-5332398d6374-config-volume\") pod \"81acc89a-7a32-4040-93b5-5332398d6374\" (UID: \"81acc89a-7a32-4040-93b5-5332398d6374\") " Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.153372 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81acc89a-7a32-4040-93b5-5332398d6374-config-volume" (OuterVolumeSpecName: "config-volume") pod "81acc89a-7a32-4040-93b5-5332398d6374" (UID: "81acc89a-7a32-4040-93b5-5332398d6374"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.160024 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81acc89a-7a32-4040-93b5-5332398d6374-kube-api-access-jwzh9" (OuterVolumeSpecName: "kube-api-access-jwzh9") pod "81acc89a-7a32-4040-93b5-5332398d6374" (UID: "81acc89a-7a32-4040-93b5-5332398d6374"). 
InnerVolumeSpecName "kube-api-access-jwzh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.160122 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81acc89a-7a32-4040-93b5-5332398d6374-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "81acc89a-7a32-4040-93b5-5332398d6374" (UID: "81acc89a-7a32-4040-93b5-5332398d6374"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.254121 4942 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81acc89a-7a32-4040-93b5-5332398d6374-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.254455 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwzh9\" (UniqueName: \"kubernetes.io/projected/81acc89a-7a32-4040-93b5-5332398d6374-kube-api-access-jwzh9\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.254470 4942 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81acc89a-7a32-4040-93b5-5332398d6374-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.635122 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6dvm" event={"ID":"53c5bf36-8646-4dfb-a736-038ae98719e0","Type":"ContainerStarted","Data":"35d4a4d78ed077da802823eb576e25f1786ce8fba30d6ecec83341020487af77"} Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.637809 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" 
event={"ID":"81acc89a-7a32-4040-93b5-5332398d6374","Type":"ContainerDied","Data":"230cf1b7689637e6ab2309b369f9c74f4311b7a88c2f87c93eea3d9f235f4ec2"} Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.637840 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="230cf1b7689637e6ab2309b369f9c74f4311b7a88c2f87c93eea3d9f235f4ec2" Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.637853 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524095-ztvmd" Feb 18 20:15:03 crc kubenswrapper[4942]: I0218 20:15:03.665345 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r6dvm" podStartSLOduration=1.975015827 podStartE2EDuration="5.665328849s" podCreationTimestamp="2026-02-18 20:14:58 +0000 UTC" firstStartedPulling="2026-02-18 20:14:59.585923579 +0000 UTC m=+3459.290856244" lastFinishedPulling="2026-02-18 20:15:03.276236591 +0000 UTC m=+3462.981169266" observedRunningTime="2026-02-18 20:15:03.656340881 +0000 UTC m=+3463.361273546" watchObservedRunningTime="2026-02-18 20:15:03.665328849 +0000 UTC m=+3463.370261514" Feb 18 20:15:04 crc kubenswrapper[4942]: I0218 20:15:04.057491 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh"] Feb 18 20:15:04 crc kubenswrapper[4942]: I0218 20:15:04.066435 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524050-zccjh"] Feb 18 20:15:05 crc kubenswrapper[4942]: I0218 20:15:05.050173 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50e6e4f2-9597-4f04-aa2d-d60b56446486" path="/var/lib/kubelet/pods/50e6e4f2-9597-4f04-aa2d-d60b56446486/volumes" Feb 18 20:15:08 crc kubenswrapper[4942]: I0218 20:15:08.643161 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-r6dvm"
Feb 18 20:15:08 crc kubenswrapper[4942]: I0218 20:15:08.643885 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r6dvm"
Feb 18 20:15:08 crc kubenswrapper[4942]: I0218 20:15:08.707492 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r6dvm"
Feb 18 20:15:08 crc kubenswrapper[4942]: I0218 20:15:08.753371 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r6dvm"
Feb 18 20:15:08 crc kubenswrapper[4942]: I0218 20:15:08.947429 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r6dvm"]
Feb 18 20:15:10 crc kubenswrapper[4942]: I0218 20:15:10.695861 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r6dvm" podUID="53c5bf36-8646-4dfb-a736-038ae98719e0" containerName="registry-server" containerID="cri-o://35d4a4d78ed077da802823eb576e25f1786ce8fba30d6ecec83341020487af77" gracePeriod=2
Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.173353 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r6dvm"
Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.242698 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gfk4\" (UniqueName: \"kubernetes.io/projected/53c5bf36-8646-4dfb-a736-038ae98719e0-kube-api-access-6gfk4\") pod \"53c5bf36-8646-4dfb-a736-038ae98719e0\" (UID: \"53c5bf36-8646-4dfb-a736-038ae98719e0\") "
Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.242883 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c5bf36-8646-4dfb-a736-038ae98719e0-utilities\") pod \"53c5bf36-8646-4dfb-a736-038ae98719e0\" (UID: \"53c5bf36-8646-4dfb-a736-038ae98719e0\") "
Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.243005 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c5bf36-8646-4dfb-a736-038ae98719e0-catalog-content\") pod \"53c5bf36-8646-4dfb-a736-038ae98719e0\" (UID: \"53c5bf36-8646-4dfb-a736-038ae98719e0\") "
Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.244100 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53c5bf36-8646-4dfb-a736-038ae98719e0-utilities" (OuterVolumeSpecName: "utilities") pod "53c5bf36-8646-4dfb-a736-038ae98719e0" (UID: "53c5bf36-8646-4dfb-a736-038ae98719e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.255978 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53c5bf36-8646-4dfb-a736-038ae98719e0-kube-api-access-6gfk4" (OuterVolumeSpecName: "kube-api-access-6gfk4") pod "53c5bf36-8646-4dfb-a736-038ae98719e0" (UID: "53c5bf36-8646-4dfb-a736-038ae98719e0"). InnerVolumeSpecName "kube-api-access-6gfk4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.344316 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gfk4\" (UniqueName: \"kubernetes.io/projected/53c5bf36-8646-4dfb-a736-038ae98719e0-kube-api-access-6gfk4\") on node \"crc\" DevicePath \"\""
Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.344356 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53c5bf36-8646-4dfb-a736-038ae98719e0-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.707600 4942 generic.go:334] "Generic (PLEG): container finished" podID="53c5bf36-8646-4dfb-a736-038ae98719e0" containerID="35d4a4d78ed077da802823eb576e25f1786ce8fba30d6ecec83341020487af77" exitCode=0
Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.707653 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r6dvm"
Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.707677 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6dvm" event={"ID":"53c5bf36-8646-4dfb-a736-038ae98719e0","Type":"ContainerDied","Data":"35d4a4d78ed077da802823eb576e25f1786ce8fba30d6ecec83341020487af77"}
Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.708064 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6dvm" event={"ID":"53c5bf36-8646-4dfb-a736-038ae98719e0","Type":"ContainerDied","Data":"4135badf274346a5d8726ac2a925164017752639dc54d9d9ac76c44933d06402"}
Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.708087 4942 scope.go:117] "RemoveContainer" containerID="35d4a4d78ed077da802823eb576e25f1786ce8fba30d6ecec83341020487af77"
Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.726858 4942 scope.go:117] "RemoveContainer" containerID="7ae16e640a62c3c435f2c4eb293b0bc9abe41bbb203716364c9a4f3546a602e5"
Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.771182 4942 scope.go:117] "RemoveContainer" containerID="931405098cc3e52001cb681fc8feb5b0a195e18daa7d338a3f35c7fdff3e8b5d"
Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.808390 4942 scope.go:117] "RemoveContainer" containerID="35d4a4d78ed077da802823eb576e25f1786ce8fba30d6ecec83341020487af77"
Feb 18 20:15:11 crc kubenswrapper[4942]: E0218 20:15:11.808936 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35d4a4d78ed077da802823eb576e25f1786ce8fba30d6ecec83341020487af77\": container with ID starting with 35d4a4d78ed077da802823eb576e25f1786ce8fba30d6ecec83341020487af77 not found: ID does not exist" containerID="35d4a4d78ed077da802823eb576e25f1786ce8fba30d6ecec83341020487af77"
Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.808981 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35d4a4d78ed077da802823eb576e25f1786ce8fba30d6ecec83341020487af77"} err="failed to get container status \"35d4a4d78ed077da802823eb576e25f1786ce8fba30d6ecec83341020487af77\": rpc error: code = NotFound desc = could not find container \"35d4a4d78ed077da802823eb576e25f1786ce8fba30d6ecec83341020487af77\": container with ID starting with 35d4a4d78ed077da802823eb576e25f1786ce8fba30d6ecec83341020487af77 not found: ID does not exist"
Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.809010 4942 scope.go:117] "RemoveContainer" containerID="7ae16e640a62c3c435f2c4eb293b0bc9abe41bbb203716364c9a4f3546a602e5"
Feb 18 20:15:11 crc kubenswrapper[4942]: E0218 20:15:11.809542 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ae16e640a62c3c435f2c4eb293b0bc9abe41bbb203716364c9a4f3546a602e5\": container with ID starting with 7ae16e640a62c3c435f2c4eb293b0bc9abe41bbb203716364c9a4f3546a602e5 not found: ID does not exist" containerID="7ae16e640a62c3c435f2c4eb293b0bc9abe41bbb203716364c9a4f3546a602e5"
Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.809563 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ae16e640a62c3c435f2c4eb293b0bc9abe41bbb203716364c9a4f3546a602e5"} err="failed to get container status \"7ae16e640a62c3c435f2c4eb293b0bc9abe41bbb203716364c9a4f3546a602e5\": rpc error: code = NotFound desc = could not find container \"7ae16e640a62c3c435f2c4eb293b0bc9abe41bbb203716364c9a4f3546a602e5\": container with ID starting with 7ae16e640a62c3c435f2c4eb293b0bc9abe41bbb203716364c9a4f3546a602e5 not found: ID does not exist"
Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.809576 4942 scope.go:117] "RemoveContainer" containerID="931405098cc3e52001cb681fc8feb5b0a195e18daa7d338a3f35c7fdff3e8b5d"
Feb 18 20:15:11 crc kubenswrapper[4942]: E0218 20:15:11.809994 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"931405098cc3e52001cb681fc8feb5b0a195e18daa7d338a3f35c7fdff3e8b5d\": container with ID starting with 931405098cc3e52001cb681fc8feb5b0a195e18daa7d338a3f35c7fdff3e8b5d not found: ID does not exist" containerID="931405098cc3e52001cb681fc8feb5b0a195e18daa7d338a3f35c7fdff3e8b5d"
Feb 18 20:15:11 crc kubenswrapper[4942]: I0218 20:15:11.810037 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"931405098cc3e52001cb681fc8feb5b0a195e18daa7d338a3f35c7fdff3e8b5d"} err="failed to get container status \"931405098cc3e52001cb681fc8feb5b0a195e18daa7d338a3f35c7fdff3e8b5d\": rpc error: code = NotFound desc = could not find container \"931405098cc3e52001cb681fc8feb5b0a195e18daa7d338a3f35c7fdff3e8b5d\": container with ID starting with 931405098cc3e52001cb681fc8feb5b0a195e18daa7d338a3f35c7fdff3e8b5d not found: ID does not exist"
Feb 18 20:15:12 crc kubenswrapper[4942]: I0218 20:15:12.254599 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53c5bf36-8646-4dfb-a736-038ae98719e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53c5bf36-8646-4dfb-a736-038ae98719e0" (UID: "53c5bf36-8646-4dfb-a736-038ae98719e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 20:15:12 crc kubenswrapper[4942]: I0218 20:15:12.261265 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53c5bf36-8646-4dfb-a736-038ae98719e0-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 20:15:12 crc kubenswrapper[4942]: I0218 20:15:12.340449 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r6dvm"]
Feb 18 20:15:12 crc kubenswrapper[4942]: I0218 20:15:12.350681 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r6dvm"]
Feb 18 20:15:13 crc kubenswrapper[4942]: I0218 20:15:13.045861 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53c5bf36-8646-4dfb-a736-038ae98719e0" path="/var/lib/kubelet/pods/53c5bf36-8646-4dfb-a736-038ae98719e0/volumes"
Feb 18 20:15:50 crc kubenswrapper[4942]: I0218 20:15:50.361374 4942 scope.go:117] "RemoveContainer" containerID="45f611558efef294793c691f22c0d11c4ce92907ad4ca205006156562d59216c"
Feb 18 20:16:53 crc kubenswrapper[4942]: I0218 20:16:53.741392 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 20:16:53 crc kubenswrapper[4942]: I0218 20:16:53.741963 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 20:17:23 crc kubenswrapper[4942]: I0218 20:17:23.740525 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 20:17:23 crc kubenswrapper[4942]: I0218 20:17:23.741222 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 20:17:53 crc kubenswrapper[4942]: I0218 20:17:53.740754 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 20:17:53 crc kubenswrapper[4942]: I0218 20:17:53.741296 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 20:17:53 crc kubenswrapper[4942]: I0218 20:17:53.741348 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4"
Feb 18 20:17:53 crc kubenswrapper[4942]: I0218 20:17:53.742179 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4637aee37878bbe70bd62244a6764ec2f38f73d2c09b6cb8754f4ec3ccb78f19"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 18 20:17:53 crc kubenswrapper[4942]: I0218 20:17:53.742237 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://4637aee37878bbe70bd62244a6764ec2f38f73d2c09b6cb8754f4ec3ccb78f19" gracePeriod=600
Feb 18 20:17:54 crc kubenswrapper[4942]: I0218 20:17:54.673356 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="4637aee37878bbe70bd62244a6764ec2f38f73d2c09b6cb8754f4ec3ccb78f19" exitCode=0
Feb 18 20:17:54 crc kubenswrapper[4942]: I0218 20:17:54.673441 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"4637aee37878bbe70bd62244a6764ec2f38f73d2c09b6cb8754f4ec3ccb78f19"}
Feb 18 20:17:54 crc kubenswrapper[4942]: I0218 20:17:54.674084 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"}
Feb 18 20:17:54 crc kubenswrapper[4942]: I0218 20:17:54.674161 4942 scope.go:117] "RemoveContainer" containerID="5805170dd8a5bdf54f8aac0015f4c83ad571c8c859aab4da98b887ecc1a60495"
Feb 18 20:20:23 crc kubenswrapper[4942]: I0218 20:20:23.740405 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 20:20:23 crc kubenswrapper[4942]: I0218 20:20:23.740977 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 20:20:53 crc kubenswrapper[4942]: I0218 20:20:53.740624 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 20:20:53 crc kubenswrapper[4942]: I0218 20:20:53.741493 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 20:21:23 crc kubenswrapper[4942]: I0218 20:21:23.740555 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 20:21:23 crc kubenswrapper[4942]: I0218 20:21:23.741053 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 20:21:23 crc kubenswrapper[4942]: I0218 20:21:23.741099 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4"
Feb 18 20:21:23 crc kubenswrapper[4942]: I0218 20:21:23.741970 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 18 20:21:23 crc kubenswrapper[4942]: I0218 20:21:23.742030 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" gracePeriod=600
Feb 18 20:21:23 crc kubenswrapper[4942]: E0218 20:21:23.867907 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:21:24 crc kubenswrapper[4942]: I0218 20:21:24.868160 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" exitCode=0
Feb 18 20:21:24 crc kubenswrapper[4942]: I0218 20:21:24.868290 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"}
Feb 18 20:21:24 crc kubenswrapper[4942]: I0218 20:21:24.868882 4942 scope.go:117] "RemoveContainer" containerID="4637aee37878bbe70bd62244a6764ec2f38f73d2c09b6cb8754f4ec3ccb78f19"
Feb 18 20:21:24 crc kubenswrapper[4942]: I0218 20:21:24.869615 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"
Feb 18 20:21:24 crc kubenswrapper[4942]: E0218 20:21:24.870163 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:21:39 crc kubenswrapper[4942]: I0218 20:21:39.036440 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"
Feb 18 20:21:39 crc kubenswrapper[4942]: E0218 20:21:39.037730 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:21:50 crc kubenswrapper[4942]: I0218 20:21:50.036096 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"
Feb 18 20:21:50 crc kubenswrapper[4942]: E0218 20:21:50.036958 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:22:01 crc kubenswrapper[4942]: I0218 20:22:01.053488 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"
Feb 18 20:22:01 crc kubenswrapper[4942]: E0218 20:22:01.056352 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:22:13 crc kubenswrapper[4942]: I0218 20:22:13.035900 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"
Feb 18 20:22:13 crc kubenswrapper[4942]: E0218 20:22:13.037150 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:22:25 crc kubenswrapper[4942]: I0218 20:22:25.036457 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"
Feb 18 20:22:25 crc kubenswrapper[4942]: E0218 20:22:25.037276 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:22:38 crc kubenswrapper[4942]: I0218 20:22:38.036506 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"
Feb 18 20:22:38 crc kubenswrapper[4942]: E0218 20:22:38.037434 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:22:52 crc kubenswrapper[4942]: I0218 20:22:52.037222 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"
Feb 18 20:22:52 crc kubenswrapper[4942]: E0218 20:22:52.038389 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:23:05 crc kubenswrapper[4942]: I0218 20:23:05.035553 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"
Feb 18 20:23:05 crc kubenswrapper[4942]: E0218 20:23:05.036429 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:23:19 crc kubenswrapper[4942]: I0218 20:23:19.036196 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"
Feb 18 20:23:19 crc kubenswrapper[4942]: E0218 20:23:19.036932 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:23:33 crc kubenswrapper[4942]: I0218 20:23:33.035954 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"
Feb 18 20:23:33 crc kubenswrapper[4942]: E0218 20:23:33.036627 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.639894 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b5g55"]
Feb 18 20:23:36 crc kubenswrapper[4942]: E0218 20:23:36.640633 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81acc89a-7a32-4040-93b5-5332398d6374" containerName="collect-profiles"
Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.640645 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="81acc89a-7a32-4040-93b5-5332398d6374" containerName="collect-profiles"
Feb 18 20:23:36 crc kubenswrapper[4942]: E0218 20:23:36.640676 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53c5bf36-8646-4dfb-a736-038ae98719e0" containerName="extract-content"
Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.640682 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c5bf36-8646-4dfb-a736-038ae98719e0" containerName="extract-content"
Feb 18 20:23:36 crc kubenswrapper[4942]: E0218 20:23:36.640696 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53c5bf36-8646-4dfb-a736-038ae98719e0" containerName="extract-utilities"
Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.640702 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c5bf36-8646-4dfb-a736-038ae98719e0" containerName="extract-utilities"
Feb 18 20:23:36 crc kubenswrapper[4942]: E0218 20:23:36.640722 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53c5bf36-8646-4dfb-a736-038ae98719e0" containerName="registry-server"
Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.640729 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c5bf36-8646-4dfb-a736-038ae98719e0" containerName="registry-server"
Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.640947 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="81acc89a-7a32-4040-93b5-5332398d6374" containerName="collect-profiles"
Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.640966 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c5bf36-8646-4dfb-a736-038ae98719e0" containerName="registry-server"
Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.642311 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b5g55"
Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.658027 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b5g55"]
Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.791550 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65f646d4-3b0a-4e0a-937c-a2452f28d07a-utilities\") pod \"community-operators-b5g55\" (UID: \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\") " pod="openshift-marketplace/community-operators-b5g55"
Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.791622 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwwpq\" (UniqueName: \"kubernetes.io/projected/65f646d4-3b0a-4e0a-937c-a2452f28d07a-kube-api-access-zwwpq\") pod \"community-operators-b5g55\" (UID: \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\") " pod="openshift-marketplace/community-operators-b5g55"
Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.791776 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65f646d4-3b0a-4e0a-937c-a2452f28d07a-catalog-content\") pod \"community-operators-b5g55\" (UID: \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\") " pod="openshift-marketplace/community-operators-b5g55"
Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.893239 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwwpq\" (UniqueName: \"kubernetes.io/projected/65f646d4-3b0a-4e0a-937c-a2452f28d07a-kube-api-access-zwwpq\") pod \"community-operators-b5g55\" (UID: \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\") " pod="openshift-marketplace/community-operators-b5g55"
Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.893334 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65f646d4-3b0a-4e0a-937c-a2452f28d07a-catalog-content\") pod \"community-operators-b5g55\" (UID: \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\") " pod="openshift-marketplace/community-operators-b5g55"
Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.893427 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65f646d4-3b0a-4e0a-937c-a2452f28d07a-utilities\") pod \"community-operators-b5g55\" (UID: \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\") " pod="openshift-marketplace/community-operators-b5g55"
Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.893974 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65f646d4-3b0a-4e0a-937c-a2452f28d07a-utilities\") pod \"community-operators-b5g55\" (UID: \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\") " pod="openshift-marketplace/community-operators-b5g55"
Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.893992 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65f646d4-3b0a-4e0a-937c-a2452f28d07a-catalog-content\") pod \"community-operators-b5g55\" (UID: \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\") " pod="openshift-marketplace/community-operators-b5g55"
Feb 18 20:23:36 crc kubenswrapper[4942]: I0218 20:23:36.913548 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwwpq\" (UniqueName: \"kubernetes.io/projected/65f646d4-3b0a-4e0a-937c-a2452f28d07a-kube-api-access-zwwpq\") pod \"community-operators-b5g55\" (UID: \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\") " pod="openshift-marketplace/community-operators-b5g55"
Feb 18 20:23:37 crc kubenswrapper[4942]: I0218 20:23:37.001569 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b5g55"
Feb 18 20:23:37 crc kubenswrapper[4942]: I0218 20:23:37.555259 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b5g55"]
Feb 18 20:23:38 crc kubenswrapper[4942]: I0218 20:23:38.329827 4942 generic.go:334] "Generic (PLEG): container finished" podID="65f646d4-3b0a-4e0a-937c-a2452f28d07a" containerID="ab46d5481265a98fed8e46cd19caabde830f4e830b4c0d1c81988263fd39f087" exitCode=0
Feb 18 20:23:38 crc kubenswrapper[4942]: I0218 20:23:38.330207 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5g55" event={"ID":"65f646d4-3b0a-4e0a-937c-a2452f28d07a","Type":"ContainerDied","Data":"ab46d5481265a98fed8e46cd19caabde830f4e830b4c0d1c81988263fd39f087"}
Feb 18 20:23:38 crc kubenswrapper[4942]: I0218 20:23:38.330253 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5g55" event={"ID":"65f646d4-3b0a-4e0a-937c-a2452f28d07a","Type":"ContainerStarted","Data":"b9cc4d0833f807ebae8c60f122da6c2a174c37c08c998b0062cddc24f7779bfc"}
Feb 18 20:23:38 crc kubenswrapper[4942]: I0218 20:23:38.336892 4942 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 18 20:23:39 crc kubenswrapper[4942]: I0218 20:23:39.341186 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5g55" event={"ID":"65f646d4-3b0a-4e0a-937c-a2452f28d07a","Type":"ContainerStarted","Data":"03ab0f9a1d7fc2d7ff2620dec4fefd33d1757bde0307489a4514d36edb58e5b1"}
Feb 18 20:23:41 crc kubenswrapper[4942]: I0218 20:23:41.363876 4942 generic.go:334] "Generic (PLEG): container finished" podID="65f646d4-3b0a-4e0a-937c-a2452f28d07a" containerID="03ab0f9a1d7fc2d7ff2620dec4fefd33d1757bde0307489a4514d36edb58e5b1" exitCode=0
Feb 18 20:23:41 crc kubenswrapper[4942]: I0218 20:23:41.364073 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5g55" event={"ID":"65f646d4-3b0a-4e0a-937c-a2452f28d07a","Type":"ContainerDied","Data":"03ab0f9a1d7fc2d7ff2620dec4fefd33d1757bde0307489a4514d36edb58e5b1"}
Feb 18 20:23:42 crc kubenswrapper[4942]: I0218 20:23:42.377594 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5g55" event={"ID":"65f646d4-3b0a-4e0a-937c-a2452f28d07a","Type":"ContainerStarted","Data":"3c5c8d2944afeb01436667c48418b1471ab34b93e2cdd7ad10a784369fd56d40"}
Feb 18 20:23:44 crc kubenswrapper[4942]: I0218 20:23:44.037498 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"
Feb 18 20:23:44 crc kubenswrapper[4942]: E0218 20:23:44.038179 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:23:47 crc kubenswrapper[4942]: I0218 20:23:47.002607 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b5g55"
Feb 18 20:23:47 crc kubenswrapper[4942]: I0218 20:23:47.003172 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b5g55"
Feb 18 20:23:47 crc kubenswrapper[4942]: I0218 20:23:47.083801 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b5g55"
Feb 18 20:23:47 crc kubenswrapper[4942]: I0218 20:23:47.113905 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b5g55" podStartSLOduration=7.638243856 podStartE2EDuration="11.113884617s" podCreationTimestamp="2026-02-18 20:23:36 +0000 UTC" firstStartedPulling="2026-02-18 20:23:38.334565713 +0000 UTC m=+3978.039498388" lastFinishedPulling="2026-02-18 20:23:41.810206474 +0000 UTC m=+3981.515139149" observedRunningTime="2026-02-18 20:23:42.403337817 +0000 UTC m=+3982.108270512" watchObservedRunningTime="2026-02-18 20:23:47.113884617 +0000 UTC m=+3986.818817282"
Feb 18 20:23:47 crc kubenswrapper[4942]: I0218 20:23:47.502385 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b5g55"
Feb 18 20:23:48 crc kubenswrapper[4942]: I0218 20:23:48.344370 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b5g55"]
Feb 18 20:23:49 crc kubenswrapper[4942]: I0218 20:23:49.454524 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b5g55" podUID="65f646d4-3b0a-4e0a-937c-a2452f28d07a" containerName="registry-server" containerID="cri-o://3c5c8d2944afeb01436667c48418b1471ab34b93e2cdd7ad10a784369fd56d40" gracePeriod=2
Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.130307 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b5g55"
Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.227384 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwwpq\" (UniqueName: \"kubernetes.io/projected/65f646d4-3b0a-4e0a-937c-a2452f28d07a-kube-api-access-zwwpq\") pod \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\" (UID: \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\") "
Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.227511 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65f646d4-3b0a-4e0a-937c-a2452f28d07a-catalog-content\") pod \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\" (UID: \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\") "
Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.227561 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65f646d4-3b0a-4e0a-937c-a2452f28d07a-utilities\") pod \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\" (UID: \"65f646d4-3b0a-4e0a-937c-a2452f28d07a\") "
Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.229541 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65f646d4-3b0a-4e0a-937c-a2452f28d07a-utilities" (OuterVolumeSpecName: "utilities") pod "65f646d4-3b0a-4e0a-937c-a2452f28d07a" (UID: "65f646d4-3b0a-4e0a-937c-a2452f28d07a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.235989 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65f646d4-3b0a-4e0a-937c-a2452f28d07a-kube-api-access-zwwpq" (OuterVolumeSpecName: "kube-api-access-zwwpq") pod "65f646d4-3b0a-4e0a-937c-a2452f28d07a" (UID: "65f646d4-3b0a-4e0a-937c-a2452f28d07a"). InnerVolumeSpecName "kube-api-access-zwwpq".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.309026 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65f646d4-3b0a-4e0a-937c-a2452f28d07a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65f646d4-3b0a-4e0a-937c-a2452f28d07a" (UID: "65f646d4-3b0a-4e0a-937c-a2452f28d07a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.330527 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65f646d4-3b0a-4e0a-937c-a2452f28d07a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.330570 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65f646d4-3b0a-4e0a-937c-a2452f28d07a-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.330587 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwwpq\" (UniqueName: \"kubernetes.io/projected/65f646d4-3b0a-4e0a-937c-a2452f28d07a-kube-api-access-zwwpq\") on node \"crc\" DevicePath \"\"" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.468829 4942 generic.go:334] "Generic (PLEG): container finished" podID="65f646d4-3b0a-4e0a-937c-a2452f28d07a" containerID="3c5c8d2944afeb01436667c48418b1471ab34b93e2cdd7ad10a784369fd56d40" exitCode=0 Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.468880 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5g55" event={"ID":"65f646d4-3b0a-4e0a-937c-a2452f28d07a","Type":"ContainerDied","Data":"3c5c8d2944afeb01436667c48418b1471ab34b93e2cdd7ad10a784369fd56d40"} Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.468917 4942 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-b5g55" event={"ID":"65f646d4-3b0a-4e0a-937c-a2452f28d07a","Type":"ContainerDied","Data":"b9cc4d0833f807ebae8c60f122da6c2a174c37c08c998b0062cddc24f7779bfc"} Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.468939 4942 scope.go:117] "RemoveContainer" containerID="3c5c8d2944afeb01436667c48418b1471ab34b93e2cdd7ad10a784369fd56d40" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.468945 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b5g55" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.502960 4942 scope.go:117] "RemoveContainer" containerID="03ab0f9a1d7fc2d7ff2620dec4fefd33d1757bde0307489a4514d36edb58e5b1" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.538187 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b5g55"] Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.550001 4942 scope.go:117] "RemoveContainer" containerID="ab46d5481265a98fed8e46cd19caabde830f4e830b4c0d1c81988263fd39f087" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.550601 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b5g55"] Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.603818 4942 scope.go:117] "RemoveContainer" containerID="3c5c8d2944afeb01436667c48418b1471ab34b93e2cdd7ad10a784369fd56d40" Feb 18 20:23:50 crc kubenswrapper[4942]: E0218 20:23:50.604479 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c5c8d2944afeb01436667c48418b1471ab34b93e2cdd7ad10a784369fd56d40\": container with ID starting with 3c5c8d2944afeb01436667c48418b1471ab34b93e2cdd7ad10a784369fd56d40 not found: ID does not exist" containerID="3c5c8d2944afeb01436667c48418b1471ab34b93e2cdd7ad10a784369fd56d40" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 
20:23:50.604551 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c5c8d2944afeb01436667c48418b1471ab34b93e2cdd7ad10a784369fd56d40"} err="failed to get container status \"3c5c8d2944afeb01436667c48418b1471ab34b93e2cdd7ad10a784369fd56d40\": rpc error: code = NotFound desc = could not find container \"3c5c8d2944afeb01436667c48418b1471ab34b93e2cdd7ad10a784369fd56d40\": container with ID starting with 3c5c8d2944afeb01436667c48418b1471ab34b93e2cdd7ad10a784369fd56d40 not found: ID does not exist" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.604594 4942 scope.go:117] "RemoveContainer" containerID="03ab0f9a1d7fc2d7ff2620dec4fefd33d1757bde0307489a4514d36edb58e5b1" Feb 18 20:23:50 crc kubenswrapper[4942]: E0218 20:23:50.605155 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03ab0f9a1d7fc2d7ff2620dec4fefd33d1757bde0307489a4514d36edb58e5b1\": container with ID starting with 03ab0f9a1d7fc2d7ff2620dec4fefd33d1757bde0307489a4514d36edb58e5b1 not found: ID does not exist" containerID="03ab0f9a1d7fc2d7ff2620dec4fefd33d1757bde0307489a4514d36edb58e5b1" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.605218 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03ab0f9a1d7fc2d7ff2620dec4fefd33d1757bde0307489a4514d36edb58e5b1"} err="failed to get container status \"03ab0f9a1d7fc2d7ff2620dec4fefd33d1757bde0307489a4514d36edb58e5b1\": rpc error: code = NotFound desc = could not find container \"03ab0f9a1d7fc2d7ff2620dec4fefd33d1757bde0307489a4514d36edb58e5b1\": container with ID starting with 03ab0f9a1d7fc2d7ff2620dec4fefd33d1757bde0307489a4514d36edb58e5b1 not found: ID does not exist" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.605255 4942 scope.go:117] "RemoveContainer" containerID="ab46d5481265a98fed8e46cd19caabde830f4e830b4c0d1c81988263fd39f087" Feb 18 20:23:50 crc 
kubenswrapper[4942]: E0218 20:23:50.606482 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab46d5481265a98fed8e46cd19caabde830f4e830b4c0d1c81988263fd39f087\": container with ID starting with ab46d5481265a98fed8e46cd19caabde830f4e830b4c0d1c81988263fd39f087 not found: ID does not exist" containerID="ab46d5481265a98fed8e46cd19caabde830f4e830b4c0d1c81988263fd39f087" Feb 18 20:23:50 crc kubenswrapper[4942]: I0218 20:23:50.606521 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab46d5481265a98fed8e46cd19caabde830f4e830b4c0d1c81988263fd39f087"} err="failed to get container status \"ab46d5481265a98fed8e46cd19caabde830f4e830b4c0d1c81988263fd39f087\": rpc error: code = NotFound desc = could not find container \"ab46d5481265a98fed8e46cd19caabde830f4e830b4c0d1c81988263fd39f087\": container with ID starting with ab46d5481265a98fed8e46cd19caabde830f4e830b4c0d1c81988263fd39f087 not found: ID does not exist" Feb 18 20:23:51 crc kubenswrapper[4942]: I0218 20:23:51.052605 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65f646d4-3b0a-4e0a-937c-a2452f28d07a" path="/var/lib/kubelet/pods/65f646d4-3b0a-4e0a-937c-a2452f28d07a/volumes" Feb 18 20:23:55 crc kubenswrapper[4942]: I0218 20:23:55.036232 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:23:55 crc kubenswrapper[4942]: E0218 20:23:55.037276 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:24:08 crc 
kubenswrapper[4942]: I0218 20:24:08.010692 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tkmxc"] Feb 18 20:24:08 crc kubenswrapper[4942]: E0218 20:24:08.015446 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f646d4-3b0a-4e0a-937c-a2452f28d07a" containerName="extract-content" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.015464 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f646d4-3b0a-4e0a-937c-a2452f28d07a" containerName="extract-content" Feb 18 20:24:08 crc kubenswrapper[4942]: E0218 20:24:08.015487 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f646d4-3b0a-4e0a-937c-a2452f28d07a" containerName="extract-utilities" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.015494 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f646d4-3b0a-4e0a-937c-a2452f28d07a" containerName="extract-utilities" Feb 18 20:24:08 crc kubenswrapper[4942]: E0218 20:24:08.015504 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f646d4-3b0a-4e0a-937c-a2452f28d07a" containerName="registry-server" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.015510 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f646d4-3b0a-4e0a-937c-a2452f28d07a" containerName="registry-server" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.015699 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="65f646d4-3b0a-4e0a-937c-a2452f28d07a" containerName="registry-server" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.017497 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.023954 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tkmxc"] Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.103119 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39477d0f-18a2-4113-9d72-f0ea81f9fae0-catalog-content\") pod \"redhat-operators-tkmxc\" (UID: \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\") " pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.103430 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39477d0f-18a2-4113-9d72-f0ea81f9fae0-utilities\") pod \"redhat-operators-tkmxc\" (UID: \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\") " pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.103484 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxr9j\" (UniqueName: \"kubernetes.io/projected/39477d0f-18a2-4113-9d72-f0ea81f9fae0-kube-api-access-rxr9j\") pod \"redhat-operators-tkmxc\" (UID: \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\") " pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.206263 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39477d0f-18a2-4113-9d72-f0ea81f9fae0-utilities\") pod \"redhat-operators-tkmxc\" (UID: \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\") " pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.206310 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rxr9j\" (UniqueName: \"kubernetes.io/projected/39477d0f-18a2-4113-9d72-f0ea81f9fae0-kube-api-access-rxr9j\") pod \"redhat-operators-tkmxc\" (UID: \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\") " pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.206469 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39477d0f-18a2-4113-9d72-f0ea81f9fae0-catalog-content\") pod \"redhat-operators-tkmxc\" (UID: \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\") " pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.206839 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39477d0f-18a2-4113-9d72-f0ea81f9fae0-utilities\") pod \"redhat-operators-tkmxc\" (UID: \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\") " pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.206945 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39477d0f-18a2-4113-9d72-f0ea81f9fae0-catalog-content\") pod \"redhat-operators-tkmxc\" (UID: \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\") " pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.771139 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxr9j\" (UniqueName: \"kubernetes.io/projected/39477d0f-18a2-4113-9d72-f0ea81f9fae0-kube-api-access-rxr9j\") pod \"redhat-operators-tkmxc\" (UID: \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\") " pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:08 crc kubenswrapper[4942]: I0218 20:24:08.938071 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:09 crc kubenswrapper[4942]: I0218 20:24:09.392791 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tkmxc"] Feb 18 20:24:09 crc kubenswrapper[4942]: I0218 20:24:09.690146 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkmxc" event={"ID":"39477d0f-18a2-4113-9d72-f0ea81f9fae0","Type":"ContainerStarted","Data":"cfd8cf62941eb29c68b53718cc2cff228f3199df60e6d2272f680537b495bbf1"} Feb 18 20:24:09 crc kubenswrapper[4942]: I0218 20:24:09.690412 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkmxc" event={"ID":"39477d0f-18a2-4113-9d72-f0ea81f9fae0","Type":"ContainerStarted","Data":"bec3723b86d98bbf735cea6ac5f7c58c09652650f8968f3e83bbea71103f6f9b"} Feb 18 20:24:10 crc kubenswrapper[4942]: I0218 20:24:10.036219 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:24:10 crc kubenswrapper[4942]: E0218 20:24:10.036802 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:24:10 crc kubenswrapper[4942]: I0218 20:24:10.702709 4942 generic.go:334] "Generic (PLEG): container finished" podID="39477d0f-18a2-4113-9d72-f0ea81f9fae0" containerID="cfd8cf62941eb29c68b53718cc2cff228f3199df60e6d2272f680537b495bbf1" exitCode=0 Feb 18 20:24:10 crc kubenswrapper[4942]: I0218 20:24:10.702805 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkmxc" 
event={"ID":"39477d0f-18a2-4113-9d72-f0ea81f9fae0","Type":"ContainerDied","Data":"cfd8cf62941eb29c68b53718cc2cff228f3199df60e6d2272f680537b495bbf1"} Feb 18 20:24:11 crc kubenswrapper[4942]: I0218 20:24:11.713381 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkmxc" event={"ID":"39477d0f-18a2-4113-9d72-f0ea81f9fae0","Type":"ContainerStarted","Data":"a59d64f9f922cfa1b1f7afdf6bd4efc1c0e92f517cbb8fc82979aad11f90dcff"} Feb 18 20:24:16 crc kubenswrapper[4942]: I0218 20:24:16.788267 4942 generic.go:334] "Generic (PLEG): container finished" podID="39477d0f-18a2-4113-9d72-f0ea81f9fae0" containerID="a59d64f9f922cfa1b1f7afdf6bd4efc1c0e92f517cbb8fc82979aad11f90dcff" exitCode=0 Feb 18 20:24:16 crc kubenswrapper[4942]: I0218 20:24:16.788383 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkmxc" event={"ID":"39477d0f-18a2-4113-9d72-f0ea81f9fae0","Type":"ContainerDied","Data":"a59d64f9f922cfa1b1f7afdf6bd4efc1c0e92f517cbb8fc82979aad11f90dcff"} Feb 18 20:24:17 crc kubenswrapper[4942]: I0218 20:24:17.802275 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkmxc" event={"ID":"39477d0f-18a2-4113-9d72-f0ea81f9fae0","Type":"ContainerStarted","Data":"e2077a8035c4238c706f307f6b1f930d089490f7d8095e9cae30b2fc7349c59c"} Feb 18 20:24:17 crc kubenswrapper[4942]: I0218 20:24:17.834549 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tkmxc" podStartSLOduration=4.374556807 podStartE2EDuration="10.834531604s" podCreationTimestamp="2026-02-18 20:24:07 +0000 UTC" firstStartedPulling="2026-02-18 20:24:10.706156696 +0000 UTC m=+4010.411089401" lastFinishedPulling="2026-02-18 20:24:17.166131533 +0000 UTC m=+4016.871064198" observedRunningTime="2026-02-18 20:24:17.821944501 +0000 UTC m=+4017.526877176" watchObservedRunningTime="2026-02-18 20:24:17.834531604 +0000 UTC m=+4017.539464269" 
Feb 18 20:24:18 crc kubenswrapper[4942]: I0218 20:24:18.939170 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:18 crc kubenswrapper[4942]: I0218 20:24:18.939242 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:19 crc kubenswrapper[4942]: I0218 20:24:19.997026 4942 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tkmxc" podUID="39477d0f-18a2-4113-9d72-f0ea81f9fae0" containerName="registry-server" probeResult="failure" output=< Feb 18 20:24:19 crc kubenswrapper[4942]: timeout: failed to connect service ":50051" within 1s Feb 18 20:24:19 crc kubenswrapper[4942]: > Feb 18 20:24:24 crc kubenswrapper[4942]: I0218 20:24:24.036347 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:24:24 crc kubenswrapper[4942]: E0218 20:24:24.037430 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:24:29 crc kubenswrapper[4942]: I0218 20:24:29.001900 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:29 crc kubenswrapper[4942]: I0218 20:24:29.058714 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:29 crc kubenswrapper[4942]: I0218 20:24:29.249286 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-tkmxc"] Feb 18 20:24:30 crc kubenswrapper[4942]: I0218 20:24:30.942916 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tkmxc" podUID="39477d0f-18a2-4113-9d72-f0ea81f9fae0" containerName="registry-server" containerID="cri-o://e2077a8035c4238c706f307f6b1f930d089490f7d8095e9cae30b2fc7349c59c" gracePeriod=2 Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.506457 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.608529 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39477d0f-18a2-4113-9d72-f0ea81f9fae0-catalog-content\") pod \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\" (UID: \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\") " Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.608627 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxr9j\" (UniqueName: \"kubernetes.io/projected/39477d0f-18a2-4113-9d72-f0ea81f9fae0-kube-api-access-rxr9j\") pod \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\" (UID: \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\") " Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.608756 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39477d0f-18a2-4113-9d72-f0ea81f9fae0-utilities\") pod \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\" (UID: \"39477d0f-18a2-4113-9d72-f0ea81f9fae0\") " Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.609713 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39477d0f-18a2-4113-9d72-f0ea81f9fae0-utilities" (OuterVolumeSpecName: "utilities") pod "39477d0f-18a2-4113-9d72-f0ea81f9fae0" (UID: 
"39477d0f-18a2-4113-9d72-f0ea81f9fae0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.618053 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39477d0f-18a2-4113-9d72-f0ea81f9fae0-kube-api-access-rxr9j" (OuterVolumeSpecName: "kube-api-access-rxr9j") pod "39477d0f-18a2-4113-9d72-f0ea81f9fae0" (UID: "39477d0f-18a2-4113-9d72-f0ea81f9fae0"). InnerVolumeSpecName "kube-api-access-rxr9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.711137 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxr9j\" (UniqueName: \"kubernetes.io/projected/39477d0f-18a2-4113-9d72-f0ea81f9fae0-kube-api-access-rxr9j\") on node \"crc\" DevicePath \"\"" Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.711181 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39477d0f-18a2-4113-9d72-f0ea81f9fae0-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.762453 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39477d0f-18a2-4113-9d72-f0ea81f9fae0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39477d0f-18a2-4113-9d72-f0ea81f9fae0" (UID: "39477d0f-18a2-4113-9d72-f0ea81f9fae0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.813435 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39477d0f-18a2-4113-9d72-f0ea81f9fae0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.958227 4942 generic.go:334] "Generic (PLEG): container finished" podID="39477d0f-18a2-4113-9d72-f0ea81f9fae0" containerID="e2077a8035c4238c706f307f6b1f930d089490f7d8095e9cae30b2fc7349c59c" exitCode=0 Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.958299 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tkmxc" Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.958326 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkmxc" event={"ID":"39477d0f-18a2-4113-9d72-f0ea81f9fae0","Type":"ContainerDied","Data":"e2077a8035c4238c706f307f6b1f930d089490f7d8095e9cae30b2fc7349c59c"} Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.959628 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkmxc" event={"ID":"39477d0f-18a2-4113-9d72-f0ea81f9fae0","Type":"ContainerDied","Data":"bec3723b86d98bbf735cea6ac5f7c58c09652650f8968f3e83bbea71103f6f9b"} Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.959657 4942 scope.go:117] "RemoveContainer" containerID="e2077a8035c4238c706f307f6b1f930d089490f7d8095e9cae30b2fc7349c59c" Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.993843 4942 scope.go:117] "RemoveContainer" containerID="a59d64f9f922cfa1b1f7afdf6bd4efc1c0e92f517cbb8fc82979aad11f90dcff" Feb 18 20:24:31 crc kubenswrapper[4942]: I0218 20:24:31.996577 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tkmxc"] Feb 18 20:24:32 crc kubenswrapper[4942]: I0218 
20:24:32.004324 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tkmxc"] Feb 18 20:24:32 crc kubenswrapper[4942]: I0218 20:24:32.028261 4942 scope.go:117] "RemoveContainer" containerID="cfd8cf62941eb29c68b53718cc2cff228f3199df60e6d2272f680537b495bbf1" Feb 18 20:24:32 crc kubenswrapper[4942]: I0218 20:24:32.085323 4942 scope.go:117] "RemoveContainer" containerID="e2077a8035c4238c706f307f6b1f930d089490f7d8095e9cae30b2fc7349c59c" Feb 18 20:24:32 crc kubenswrapper[4942]: E0218 20:24:32.085885 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2077a8035c4238c706f307f6b1f930d089490f7d8095e9cae30b2fc7349c59c\": container with ID starting with e2077a8035c4238c706f307f6b1f930d089490f7d8095e9cae30b2fc7349c59c not found: ID does not exist" containerID="e2077a8035c4238c706f307f6b1f930d089490f7d8095e9cae30b2fc7349c59c" Feb 18 20:24:32 crc kubenswrapper[4942]: I0218 20:24:32.086023 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2077a8035c4238c706f307f6b1f930d089490f7d8095e9cae30b2fc7349c59c"} err="failed to get container status \"e2077a8035c4238c706f307f6b1f930d089490f7d8095e9cae30b2fc7349c59c\": rpc error: code = NotFound desc = could not find container \"e2077a8035c4238c706f307f6b1f930d089490f7d8095e9cae30b2fc7349c59c\": container with ID starting with e2077a8035c4238c706f307f6b1f930d089490f7d8095e9cae30b2fc7349c59c not found: ID does not exist" Feb 18 20:24:32 crc kubenswrapper[4942]: I0218 20:24:32.086108 4942 scope.go:117] "RemoveContainer" containerID="a59d64f9f922cfa1b1f7afdf6bd4efc1c0e92f517cbb8fc82979aad11f90dcff" Feb 18 20:24:32 crc kubenswrapper[4942]: E0218 20:24:32.086592 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a59d64f9f922cfa1b1f7afdf6bd4efc1c0e92f517cbb8fc82979aad11f90dcff\": container with ID 
starting with a59d64f9f922cfa1b1f7afdf6bd4efc1c0e92f517cbb8fc82979aad11f90dcff not found: ID does not exist" containerID="a59d64f9f922cfa1b1f7afdf6bd4efc1c0e92f517cbb8fc82979aad11f90dcff" Feb 18 20:24:32 crc kubenswrapper[4942]: I0218 20:24:32.086622 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a59d64f9f922cfa1b1f7afdf6bd4efc1c0e92f517cbb8fc82979aad11f90dcff"} err="failed to get container status \"a59d64f9f922cfa1b1f7afdf6bd4efc1c0e92f517cbb8fc82979aad11f90dcff\": rpc error: code = NotFound desc = could not find container \"a59d64f9f922cfa1b1f7afdf6bd4efc1c0e92f517cbb8fc82979aad11f90dcff\": container with ID starting with a59d64f9f922cfa1b1f7afdf6bd4efc1c0e92f517cbb8fc82979aad11f90dcff not found: ID does not exist" Feb 18 20:24:32 crc kubenswrapper[4942]: I0218 20:24:32.086644 4942 scope.go:117] "RemoveContainer" containerID="cfd8cf62941eb29c68b53718cc2cff228f3199df60e6d2272f680537b495bbf1" Feb 18 20:24:32 crc kubenswrapper[4942]: E0218 20:24:32.086964 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfd8cf62941eb29c68b53718cc2cff228f3199df60e6d2272f680537b495bbf1\": container with ID starting with cfd8cf62941eb29c68b53718cc2cff228f3199df60e6d2272f680537b495bbf1 not found: ID does not exist" containerID="cfd8cf62941eb29c68b53718cc2cff228f3199df60e6d2272f680537b495bbf1" Feb 18 20:24:32 crc kubenswrapper[4942]: I0218 20:24:32.087114 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfd8cf62941eb29c68b53718cc2cff228f3199df60e6d2272f680537b495bbf1"} err="failed to get container status \"cfd8cf62941eb29c68b53718cc2cff228f3199df60e6d2272f680537b495bbf1\": rpc error: code = NotFound desc = could not find container \"cfd8cf62941eb29c68b53718cc2cff228f3199df60e6d2272f680537b495bbf1\": container with ID starting with cfd8cf62941eb29c68b53718cc2cff228f3199df60e6d2272f680537b495bbf1 not found: 
ID does not exist" Feb 18 20:24:33 crc kubenswrapper[4942]: I0218 20:24:33.049997 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39477d0f-18a2-4113-9d72-f0ea81f9fae0" path="/var/lib/kubelet/pods/39477d0f-18a2-4113-9d72-f0ea81f9fae0/volumes" Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.418808 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sdc46"] Feb 18 20:24:35 crc kubenswrapper[4942]: E0218 20:24:35.419338 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39477d0f-18a2-4113-9d72-f0ea81f9fae0" containerName="registry-server" Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.419353 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="39477d0f-18a2-4113-9d72-f0ea81f9fae0" containerName="registry-server" Feb 18 20:24:35 crc kubenswrapper[4942]: E0218 20:24:35.419365 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39477d0f-18a2-4113-9d72-f0ea81f9fae0" containerName="extract-content" Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.419372 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="39477d0f-18a2-4113-9d72-f0ea81f9fae0" containerName="extract-content" Feb 18 20:24:35 crc kubenswrapper[4942]: E0218 20:24:35.419392 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39477d0f-18a2-4113-9d72-f0ea81f9fae0" containerName="extract-utilities" Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.419398 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="39477d0f-18a2-4113-9d72-f0ea81f9fae0" containerName="extract-utilities" Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.419566 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="39477d0f-18a2-4113-9d72-f0ea81f9fae0" containerName="registry-server" Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.421040 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdc46" Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.434554 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdc46"] Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.594339 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pgtl\" (UniqueName: \"kubernetes.io/projected/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-kube-api-access-9pgtl\") pod \"redhat-marketplace-sdc46\" (UID: \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\") " pod="openshift-marketplace/redhat-marketplace-sdc46" Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.594494 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-utilities\") pod \"redhat-marketplace-sdc46\" (UID: \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\") " pod="openshift-marketplace/redhat-marketplace-sdc46" Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.594597 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-catalog-content\") pod \"redhat-marketplace-sdc46\" (UID: \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\") " pod="openshift-marketplace/redhat-marketplace-sdc46" Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.696631 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pgtl\" (UniqueName: \"kubernetes.io/projected/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-kube-api-access-9pgtl\") pod \"redhat-marketplace-sdc46\" (UID: \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\") " pod="openshift-marketplace/redhat-marketplace-sdc46" Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.696716 4942 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-utilities\") pod \"redhat-marketplace-sdc46\" (UID: \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\") " pod="openshift-marketplace/redhat-marketplace-sdc46" Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.696784 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-catalog-content\") pod \"redhat-marketplace-sdc46\" (UID: \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\") " pod="openshift-marketplace/redhat-marketplace-sdc46" Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.697398 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-catalog-content\") pod \"redhat-marketplace-sdc46\" (UID: \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\") " pod="openshift-marketplace/redhat-marketplace-sdc46" Feb 18 20:24:35 crc kubenswrapper[4942]: I0218 20:24:35.697697 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-utilities\") pod \"redhat-marketplace-sdc46\" (UID: \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\") " pod="openshift-marketplace/redhat-marketplace-sdc46" Feb 18 20:24:36 crc kubenswrapper[4942]: I0218 20:24:36.173338 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pgtl\" (UniqueName: \"kubernetes.io/projected/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-kube-api-access-9pgtl\") pod \"redhat-marketplace-sdc46\" (UID: \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\") " pod="openshift-marketplace/redhat-marketplace-sdc46" Feb 18 20:24:36 crc kubenswrapper[4942]: I0218 20:24:36.349954 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdc46" Feb 18 20:24:36 crc kubenswrapper[4942]: I0218 20:24:36.838199 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdc46"] Feb 18 20:24:37 crc kubenswrapper[4942]: I0218 20:24:37.023069 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdc46" event={"ID":"c74f41b3-2cc8-42d4-90b3-e2252bed77f6","Type":"ContainerStarted","Data":"c9b0fb017ab3cc6daf6a8949493b9a9ef3c2c4268df580e987c1eed967f4d2fa"} Feb 18 20:24:38 crc kubenswrapper[4942]: I0218 20:24:38.036219 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:24:38 crc kubenswrapper[4942]: E0218 20:24:38.037130 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:24:38 crc kubenswrapper[4942]: I0218 20:24:38.038368 4942 generic.go:334] "Generic (PLEG): container finished" podID="c74f41b3-2cc8-42d4-90b3-e2252bed77f6" containerID="c32ba00cae4f7d1dd51ba40cbe103ac2c5e72299822a28333e5a4fab02b0f3d3" exitCode=0 Feb 18 20:24:38 crc kubenswrapper[4942]: I0218 20:24:38.038429 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdc46" event={"ID":"c74f41b3-2cc8-42d4-90b3-e2252bed77f6","Type":"ContainerDied","Data":"c32ba00cae4f7d1dd51ba40cbe103ac2c5e72299822a28333e5a4fab02b0f3d3"} Feb 18 20:24:40 crc kubenswrapper[4942]: I0218 20:24:40.062161 4942 generic.go:334] "Generic (PLEG): container finished" podID="c74f41b3-2cc8-42d4-90b3-e2252bed77f6" 
containerID="0a449d367accd206c984e8836605943c66b351dce79ad1676d37bcbde16abe47" exitCode=0 Feb 18 20:24:40 crc kubenswrapper[4942]: I0218 20:24:40.062251 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdc46" event={"ID":"c74f41b3-2cc8-42d4-90b3-e2252bed77f6","Type":"ContainerDied","Data":"0a449d367accd206c984e8836605943c66b351dce79ad1676d37bcbde16abe47"} Feb 18 20:24:41 crc kubenswrapper[4942]: I0218 20:24:41.075188 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdc46" event={"ID":"c74f41b3-2cc8-42d4-90b3-e2252bed77f6","Type":"ContainerStarted","Data":"b340bc5bb7932fcb915e74753d59bd9ee469e6f2151acce531197fd5a75a3e59"} Feb 18 20:24:41 crc kubenswrapper[4942]: I0218 20:24:41.094513 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sdc46" podStartSLOduration=3.658837708 podStartE2EDuration="6.094497786s" podCreationTimestamp="2026-02-18 20:24:35 +0000 UTC" firstStartedPulling="2026-02-18 20:24:38.040593532 +0000 UTC m=+4037.745526237" lastFinishedPulling="2026-02-18 20:24:40.47625365 +0000 UTC m=+4040.181186315" observedRunningTime="2026-02-18 20:24:41.092681138 +0000 UTC m=+4040.797613803" watchObservedRunningTime="2026-02-18 20:24:41.094497786 +0000 UTC m=+4040.799430451" Feb 18 20:24:46 crc kubenswrapper[4942]: I0218 20:24:46.351118 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sdc46" Feb 18 20:24:46 crc kubenswrapper[4942]: I0218 20:24:46.351639 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sdc46" Feb 18 20:24:46 crc kubenswrapper[4942]: I0218 20:24:46.833474 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sdc46" Feb 18 20:24:47 crc kubenswrapper[4942]: I0218 20:24:47.199506 
4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sdc46" Feb 18 20:24:47 crc kubenswrapper[4942]: I0218 20:24:47.243157 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdc46"] Feb 18 20:24:49 crc kubenswrapper[4942]: I0218 20:24:49.163542 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sdc46" podUID="c74f41b3-2cc8-42d4-90b3-e2252bed77f6" containerName="registry-server" containerID="cri-o://b340bc5bb7932fcb915e74753d59bd9ee469e6f2151acce531197fd5a75a3e59" gracePeriod=2 Feb 18 20:24:49 crc kubenswrapper[4942]: I0218 20:24:49.690468 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdc46" Feb 18 20:24:49 crc kubenswrapper[4942]: I0218 20:24:49.832473 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pgtl\" (UniqueName: \"kubernetes.io/projected/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-kube-api-access-9pgtl\") pod \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\" (UID: \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\") " Feb 18 20:24:49 crc kubenswrapper[4942]: I0218 20:24:49.832660 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-catalog-content\") pod \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\" (UID: \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\") " Feb 18 20:24:49 crc kubenswrapper[4942]: I0218 20:24:49.832763 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-utilities\") pod \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\" (UID: \"c74f41b3-2cc8-42d4-90b3-e2252bed77f6\") " Feb 18 20:24:49 crc kubenswrapper[4942]: I0218 20:24:49.833583 
4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-utilities" (OuterVolumeSpecName: "utilities") pod "c74f41b3-2cc8-42d4-90b3-e2252bed77f6" (UID: "c74f41b3-2cc8-42d4-90b3-e2252bed77f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:24:49 crc kubenswrapper[4942]: I0218 20:24:49.841835 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-kube-api-access-9pgtl" (OuterVolumeSpecName: "kube-api-access-9pgtl") pod "c74f41b3-2cc8-42d4-90b3-e2252bed77f6" (UID: "c74f41b3-2cc8-42d4-90b3-e2252bed77f6"). InnerVolumeSpecName "kube-api-access-9pgtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:24:49 crc kubenswrapper[4942]: I0218 20:24:49.879353 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c74f41b3-2cc8-42d4-90b3-e2252bed77f6" (UID: "c74f41b3-2cc8-42d4-90b3-e2252bed77f6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:24:49 crc kubenswrapper[4942]: I0218 20:24:49.935159 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 20:24:49 crc kubenswrapper[4942]: I0218 20:24:49.935204 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 20:24:49 crc kubenswrapper[4942]: I0218 20:24:49.935214 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pgtl\" (UniqueName: \"kubernetes.io/projected/c74f41b3-2cc8-42d4-90b3-e2252bed77f6-kube-api-access-9pgtl\") on node \"crc\" DevicePath \"\"" Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.193319 4942 generic.go:334] "Generic (PLEG): container finished" podID="c74f41b3-2cc8-42d4-90b3-e2252bed77f6" containerID="b340bc5bb7932fcb915e74753d59bd9ee469e6f2151acce531197fd5a75a3e59" exitCode=0 Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.193378 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdc46" event={"ID":"c74f41b3-2cc8-42d4-90b3-e2252bed77f6","Type":"ContainerDied","Data":"b340bc5bb7932fcb915e74753d59bd9ee469e6f2151acce531197fd5a75a3e59"} Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.193411 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sdc46" event={"ID":"c74f41b3-2cc8-42d4-90b3-e2252bed77f6","Type":"ContainerDied","Data":"c9b0fb017ab3cc6daf6a8949493b9a9ef3c2c4268df580e987c1eed967f4d2fa"} Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.193450 4942 scope.go:117] "RemoveContainer" containerID="b340bc5bb7932fcb915e74753d59bd9ee469e6f2151acce531197fd5a75a3e59" Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 
20:24:50.193674 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sdc46" Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.252614 4942 scope.go:117] "RemoveContainer" containerID="0a449d367accd206c984e8836605943c66b351dce79ad1676d37bcbde16abe47" Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.268533 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdc46"] Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.279445 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sdc46"] Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.281372 4942 scope.go:117] "RemoveContainer" containerID="c32ba00cae4f7d1dd51ba40cbe103ac2c5e72299822a28333e5a4fab02b0f3d3" Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.352365 4942 scope.go:117] "RemoveContainer" containerID="b340bc5bb7932fcb915e74753d59bd9ee469e6f2151acce531197fd5a75a3e59" Feb 18 20:24:50 crc kubenswrapper[4942]: E0218 20:24:50.352864 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b340bc5bb7932fcb915e74753d59bd9ee469e6f2151acce531197fd5a75a3e59\": container with ID starting with b340bc5bb7932fcb915e74753d59bd9ee469e6f2151acce531197fd5a75a3e59 not found: ID does not exist" containerID="b340bc5bb7932fcb915e74753d59bd9ee469e6f2151acce531197fd5a75a3e59" Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.352977 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b340bc5bb7932fcb915e74753d59bd9ee469e6f2151acce531197fd5a75a3e59"} err="failed to get container status \"b340bc5bb7932fcb915e74753d59bd9ee469e6f2151acce531197fd5a75a3e59\": rpc error: code = NotFound desc = could not find container \"b340bc5bb7932fcb915e74753d59bd9ee469e6f2151acce531197fd5a75a3e59\": container with ID starting with 
b340bc5bb7932fcb915e74753d59bd9ee469e6f2151acce531197fd5a75a3e59 not found: ID does not exist" Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.353031 4942 scope.go:117] "RemoveContainer" containerID="0a449d367accd206c984e8836605943c66b351dce79ad1676d37bcbde16abe47" Feb 18 20:24:50 crc kubenswrapper[4942]: E0218 20:24:50.353595 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a449d367accd206c984e8836605943c66b351dce79ad1676d37bcbde16abe47\": container with ID starting with 0a449d367accd206c984e8836605943c66b351dce79ad1676d37bcbde16abe47 not found: ID does not exist" containerID="0a449d367accd206c984e8836605943c66b351dce79ad1676d37bcbde16abe47" Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.353627 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a449d367accd206c984e8836605943c66b351dce79ad1676d37bcbde16abe47"} err="failed to get container status \"0a449d367accd206c984e8836605943c66b351dce79ad1676d37bcbde16abe47\": rpc error: code = NotFound desc = could not find container \"0a449d367accd206c984e8836605943c66b351dce79ad1676d37bcbde16abe47\": container with ID starting with 0a449d367accd206c984e8836605943c66b351dce79ad1676d37bcbde16abe47 not found: ID does not exist" Feb 18 20:24:50 crc kubenswrapper[4942]: I0218 20:24:50.353649 4942 scope.go:117] "RemoveContainer" containerID="c32ba00cae4f7d1dd51ba40cbe103ac2c5e72299822a28333e5a4fab02b0f3d3" Feb 18 20:24:50 crc kubenswrapper[4942]: E0218 20:24:50.354032 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c32ba00cae4f7d1dd51ba40cbe103ac2c5e72299822a28333e5a4fab02b0f3d3\": container with ID starting with c32ba00cae4f7d1dd51ba40cbe103ac2c5e72299822a28333e5a4fab02b0f3d3 not found: ID does not exist" containerID="c32ba00cae4f7d1dd51ba40cbe103ac2c5e72299822a28333e5a4fab02b0f3d3" Feb 18 20:24:50 crc 
kubenswrapper[4942]: I0218 20:24:50.354103 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c32ba00cae4f7d1dd51ba40cbe103ac2c5e72299822a28333e5a4fab02b0f3d3"} err="failed to get container status \"c32ba00cae4f7d1dd51ba40cbe103ac2c5e72299822a28333e5a4fab02b0f3d3\": rpc error: code = NotFound desc = could not find container \"c32ba00cae4f7d1dd51ba40cbe103ac2c5e72299822a28333e5a4fab02b0f3d3\": container with ID starting with c32ba00cae4f7d1dd51ba40cbe103ac2c5e72299822a28333e5a4fab02b0f3d3 not found: ID does not exist" Feb 18 20:24:51 crc kubenswrapper[4942]: I0218 20:24:51.058932 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c74f41b3-2cc8-42d4-90b3-e2252bed77f6" path="/var/lib/kubelet/pods/c74f41b3-2cc8-42d4-90b3-e2252bed77f6/volumes" Feb 18 20:24:53 crc kubenswrapper[4942]: I0218 20:24:53.035922 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:24:53 crc kubenswrapper[4942]: E0218 20:24:53.036595 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:25:04 crc kubenswrapper[4942]: I0218 20:25:04.036356 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:25:04 crc kubenswrapper[4942]: E0218 20:25:04.037332 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:25:19 crc kubenswrapper[4942]: I0218 20:25:19.036011 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769" Feb 18 20:25:19 crc kubenswrapper[4942]: E0218 20:25:19.036886 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.199528 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5nxbp"] Feb 18 20:25:31 crc kubenswrapper[4942]: E0218 20:25:31.201063 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c74f41b3-2cc8-42d4-90b3-e2252bed77f6" containerName="extract-utilities" Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.201099 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="c74f41b3-2cc8-42d4-90b3-e2252bed77f6" containerName="extract-utilities" Feb 18 20:25:31 crc kubenswrapper[4942]: E0218 20:25:31.201182 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c74f41b3-2cc8-42d4-90b3-e2252bed77f6" containerName="extract-content" Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.201203 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="c74f41b3-2cc8-42d4-90b3-e2252bed77f6" containerName="extract-content" Feb 18 20:25:31 crc kubenswrapper[4942]: E0218 20:25:31.201276 4942 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c74f41b3-2cc8-42d4-90b3-e2252bed77f6" containerName="registry-server" Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.201294 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="c74f41b3-2cc8-42d4-90b3-e2252bed77f6" containerName="registry-server" Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.201810 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="c74f41b3-2cc8-42d4-90b3-e2252bed77f6" containerName="registry-server" Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.205634 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5nxbp" Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.218793 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5nxbp"] Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.298900 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgt89\" (UniqueName: \"kubernetes.io/projected/b14df05a-46ca-4dba-a05c-8aca88ea9643-kube-api-access-sgt89\") pod \"certified-operators-5nxbp\" (UID: \"b14df05a-46ca-4dba-a05c-8aca88ea9643\") " pod="openshift-marketplace/certified-operators-5nxbp" Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.299293 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b14df05a-46ca-4dba-a05c-8aca88ea9643-utilities\") pod \"certified-operators-5nxbp\" (UID: \"b14df05a-46ca-4dba-a05c-8aca88ea9643\") " pod="openshift-marketplace/certified-operators-5nxbp" Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.299450 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b14df05a-46ca-4dba-a05c-8aca88ea9643-catalog-content\") pod \"certified-operators-5nxbp\" (UID: 
\"b14df05a-46ca-4dba-a05c-8aca88ea9643\") " pod="openshift-marketplace/certified-operators-5nxbp" Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.401288 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgt89\" (UniqueName: \"kubernetes.io/projected/b14df05a-46ca-4dba-a05c-8aca88ea9643-kube-api-access-sgt89\") pod \"certified-operators-5nxbp\" (UID: \"b14df05a-46ca-4dba-a05c-8aca88ea9643\") " pod="openshift-marketplace/certified-operators-5nxbp" Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.401423 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b14df05a-46ca-4dba-a05c-8aca88ea9643-utilities\") pod \"certified-operators-5nxbp\" (UID: \"b14df05a-46ca-4dba-a05c-8aca88ea9643\") " pod="openshift-marketplace/certified-operators-5nxbp" Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.401488 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b14df05a-46ca-4dba-a05c-8aca88ea9643-catalog-content\") pod \"certified-operators-5nxbp\" (UID: \"b14df05a-46ca-4dba-a05c-8aca88ea9643\") " pod="openshift-marketplace/certified-operators-5nxbp" Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.402017 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b14df05a-46ca-4dba-a05c-8aca88ea9643-utilities\") pod \"certified-operators-5nxbp\" (UID: \"b14df05a-46ca-4dba-a05c-8aca88ea9643\") " pod="openshift-marketplace/certified-operators-5nxbp" Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.402100 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b14df05a-46ca-4dba-a05c-8aca88ea9643-catalog-content\") pod \"certified-operators-5nxbp\" (UID: \"b14df05a-46ca-4dba-a05c-8aca88ea9643\") 
" pod="openshift-marketplace/certified-operators-5nxbp" Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.421840 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgt89\" (UniqueName: \"kubernetes.io/projected/b14df05a-46ca-4dba-a05c-8aca88ea9643-kube-api-access-sgt89\") pod \"certified-operators-5nxbp\" (UID: \"b14df05a-46ca-4dba-a05c-8aca88ea9643\") " pod="openshift-marketplace/certified-operators-5nxbp" Feb 18 20:25:31 crc kubenswrapper[4942]: I0218 20:25:31.534398 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5nxbp" Feb 18 20:25:32 crc kubenswrapper[4942]: I0218 20:25:32.082848 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5nxbp"] Feb 18 20:25:32 crc kubenswrapper[4942]: I0218 20:25:32.621050 4942 generic.go:334] "Generic (PLEG): container finished" podID="b14df05a-46ca-4dba-a05c-8aca88ea9643" containerID="a139bdbcd44b10bd4347feac43f107406dad0f697a107e7f6b4f6861c9831a64" exitCode=0 Feb 18 20:25:32 crc kubenswrapper[4942]: I0218 20:25:32.621171 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5nxbp" event={"ID":"b14df05a-46ca-4dba-a05c-8aca88ea9643","Type":"ContainerDied","Data":"a139bdbcd44b10bd4347feac43f107406dad0f697a107e7f6b4f6861c9831a64"} Feb 18 20:25:32 crc kubenswrapper[4942]: I0218 20:25:32.621403 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5nxbp" event={"ID":"b14df05a-46ca-4dba-a05c-8aca88ea9643","Type":"ContainerStarted","Data":"b62cbba7efcc3057eaed590174d862635c69b01aabde9c445fab7d27d0db35d8"} Feb 18 20:25:33 crc kubenswrapper[4942]: I0218 20:25:33.635480 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5nxbp" 
event={"ID":"b14df05a-46ca-4dba-a05c-8aca88ea9643","Type":"ContainerStarted","Data":"bce25462aea5b7bb567c89cafdbc10b30aa10b3310edb73e614fa19e27c50e20"}
Feb 18 20:25:34 crc kubenswrapper[4942]: I0218 20:25:34.035833 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"
Feb 18 20:25:34 crc kubenswrapper[4942]: E0218 20:25:34.036150 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:25:35 crc kubenswrapper[4942]: I0218 20:25:35.656565 4942 generic.go:334] "Generic (PLEG): container finished" podID="b14df05a-46ca-4dba-a05c-8aca88ea9643" containerID="bce25462aea5b7bb567c89cafdbc10b30aa10b3310edb73e614fa19e27c50e20" exitCode=0
Feb 18 20:25:35 crc kubenswrapper[4942]: I0218 20:25:35.656663 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5nxbp" event={"ID":"b14df05a-46ca-4dba-a05c-8aca88ea9643","Type":"ContainerDied","Data":"bce25462aea5b7bb567c89cafdbc10b30aa10b3310edb73e614fa19e27c50e20"}
Feb 18 20:25:36 crc kubenswrapper[4942]: I0218 20:25:36.668580 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5nxbp" event={"ID":"b14df05a-46ca-4dba-a05c-8aca88ea9643","Type":"ContainerStarted","Data":"f38fbba31327c539341d30370a929648de96475d873277573d41e72cd5418fb0"}
Feb 18 20:25:36 crc kubenswrapper[4942]: I0218 20:25:36.690945 4942 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5nxbp" podStartSLOduration=2.254562591 podStartE2EDuration="5.690927664s" podCreationTimestamp="2026-02-18 20:25:31 +0000 UTC" firstStartedPulling="2026-02-18 20:25:32.623537087 +0000 UTC m=+4092.328469792" lastFinishedPulling="2026-02-18 20:25:36.0599022 +0000 UTC m=+4095.764834865" observedRunningTime="2026-02-18 20:25:36.685937042 +0000 UTC m=+4096.390869717" watchObservedRunningTime="2026-02-18 20:25:36.690927664 +0000 UTC m=+4096.395860329"
Feb 18 20:25:41 crc kubenswrapper[4942]: I0218 20:25:41.534994 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5nxbp"
Feb 18 20:25:41 crc kubenswrapper[4942]: I0218 20:25:41.535640 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5nxbp"
Feb 18 20:25:41 crc kubenswrapper[4942]: I0218 20:25:41.598455 4942 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5nxbp"
Feb 18 20:25:41 crc kubenswrapper[4942]: I0218 20:25:41.791425 4942 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5nxbp"
Feb 18 20:25:41 crc kubenswrapper[4942]: I0218 20:25:41.857793 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5nxbp"]
Feb 18 20:25:43 crc kubenswrapper[4942]: I0218 20:25:43.738958 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5nxbp" podUID="b14df05a-46ca-4dba-a05c-8aca88ea9643" containerName="registry-server" containerID="cri-o://f38fbba31327c539341d30370a929648de96475d873277573d41e72cd5418fb0" gracePeriod=2
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.334120 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5nxbp"
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.371633 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgt89\" (UniqueName: \"kubernetes.io/projected/b14df05a-46ca-4dba-a05c-8aca88ea9643-kube-api-access-sgt89\") pod \"b14df05a-46ca-4dba-a05c-8aca88ea9643\" (UID: \"b14df05a-46ca-4dba-a05c-8aca88ea9643\") "
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.371960 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b14df05a-46ca-4dba-a05c-8aca88ea9643-utilities\") pod \"b14df05a-46ca-4dba-a05c-8aca88ea9643\" (UID: \"b14df05a-46ca-4dba-a05c-8aca88ea9643\") "
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.372219 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b14df05a-46ca-4dba-a05c-8aca88ea9643-catalog-content\") pod \"b14df05a-46ca-4dba-a05c-8aca88ea9643\" (UID: \"b14df05a-46ca-4dba-a05c-8aca88ea9643\") "
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.382441 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b14df05a-46ca-4dba-a05c-8aca88ea9643-utilities" (OuterVolumeSpecName: "utilities") pod "b14df05a-46ca-4dba-a05c-8aca88ea9643" (UID: "b14df05a-46ca-4dba-a05c-8aca88ea9643"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.400090 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b14df05a-46ca-4dba-a05c-8aca88ea9643-kube-api-access-sgt89" (OuterVolumeSpecName: "kube-api-access-sgt89") pod "b14df05a-46ca-4dba-a05c-8aca88ea9643" (UID: "b14df05a-46ca-4dba-a05c-8aca88ea9643"). InnerVolumeSpecName "kube-api-access-sgt89". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.464638 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b14df05a-46ca-4dba-a05c-8aca88ea9643-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b14df05a-46ca-4dba-a05c-8aca88ea9643" (UID: "b14df05a-46ca-4dba-a05c-8aca88ea9643"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.474071 4942 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b14df05a-46ca-4dba-a05c-8aca88ea9643-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.474115 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgt89\" (UniqueName: \"kubernetes.io/projected/b14df05a-46ca-4dba-a05c-8aca88ea9643-kube-api-access-sgt89\") on node \"crc\" DevicePath \"\""
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.474143 4942 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b14df05a-46ca-4dba-a05c-8aca88ea9643-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.748928 4942 generic.go:334] "Generic (PLEG): container finished" podID="b14df05a-46ca-4dba-a05c-8aca88ea9643" containerID="f38fbba31327c539341d30370a929648de96475d873277573d41e72cd5418fb0" exitCode=0
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.748974 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5nxbp" event={"ID":"b14df05a-46ca-4dba-a05c-8aca88ea9643","Type":"ContainerDied","Data":"f38fbba31327c539341d30370a929648de96475d873277573d41e72cd5418fb0"}
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.748991 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5nxbp"
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.749007 4942 scope.go:117] "RemoveContainer" containerID="f38fbba31327c539341d30370a929648de96475d873277573d41e72cd5418fb0"
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.748996 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5nxbp" event={"ID":"b14df05a-46ca-4dba-a05c-8aca88ea9643","Type":"ContainerDied","Data":"b62cbba7efcc3057eaed590174d862635c69b01aabde9c445fab7d27d0db35d8"}
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.773799 4942 scope.go:117] "RemoveContainer" containerID="bce25462aea5b7bb567c89cafdbc10b30aa10b3310edb73e614fa19e27c50e20"
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.793469 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5nxbp"]
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.804474 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5nxbp"]
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.811640 4942 scope.go:117] "RemoveContainer" containerID="a139bdbcd44b10bd4347feac43f107406dad0f697a107e7f6b4f6861c9831a64"
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.871166 4942 scope.go:117] "RemoveContainer" containerID="f38fbba31327c539341d30370a929648de96475d873277573d41e72cd5418fb0"
Feb 18 20:25:44 crc kubenswrapper[4942]: E0218 20:25:44.873137 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f38fbba31327c539341d30370a929648de96475d873277573d41e72cd5418fb0\": container with ID starting with f38fbba31327c539341d30370a929648de96475d873277573d41e72cd5418fb0 not found: ID does not exist" containerID="f38fbba31327c539341d30370a929648de96475d873277573d41e72cd5418fb0"
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.873186 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f38fbba31327c539341d30370a929648de96475d873277573d41e72cd5418fb0"} err="failed to get container status \"f38fbba31327c539341d30370a929648de96475d873277573d41e72cd5418fb0\": rpc error: code = NotFound desc = could not find container \"f38fbba31327c539341d30370a929648de96475d873277573d41e72cd5418fb0\": container with ID starting with f38fbba31327c539341d30370a929648de96475d873277573d41e72cd5418fb0 not found: ID does not exist"
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.873212 4942 scope.go:117] "RemoveContainer" containerID="bce25462aea5b7bb567c89cafdbc10b30aa10b3310edb73e614fa19e27c50e20"
Feb 18 20:25:44 crc kubenswrapper[4942]: E0218 20:25:44.875153 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bce25462aea5b7bb567c89cafdbc10b30aa10b3310edb73e614fa19e27c50e20\": container with ID starting with bce25462aea5b7bb567c89cafdbc10b30aa10b3310edb73e614fa19e27c50e20 not found: ID does not exist" containerID="bce25462aea5b7bb567c89cafdbc10b30aa10b3310edb73e614fa19e27c50e20"
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.875184 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bce25462aea5b7bb567c89cafdbc10b30aa10b3310edb73e614fa19e27c50e20"} err="failed to get container status \"bce25462aea5b7bb567c89cafdbc10b30aa10b3310edb73e614fa19e27c50e20\": rpc error: code = NotFound desc = could not find container \"bce25462aea5b7bb567c89cafdbc10b30aa10b3310edb73e614fa19e27c50e20\": container with ID starting with bce25462aea5b7bb567c89cafdbc10b30aa10b3310edb73e614fa19e27c50e20 not found: ID does not exist"
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.875204 4942 scope.go:117] "RemoveContainer" containerID="a139bdbcd44b10bd4347feac43f107406dad0f697a107e7f6b4f6861c9831a64"
Feb 18 20:25:44 crc kubenswrapper[4942]: E0218 20:25:44.877881 4942 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a139bdbcd44b10bd4347feac43f107406dad0f697a107e7f6b4f6861c9831a64\": container with ID starting with a139bdbcd44b10bd4347feac43f107406dad0f697a107e7f6b4f6861c9831a64 not found: ID does not exist" containerID="a139bdbcd44b10bd4347feac43f107406dad0f697a107e7f6b4f6861c9831a64"
Feb 18 20:25:44 crc kubenswrapper[4942]: I0218 20:25:44.877906 4942 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a139bdbcd44b10bd4347feac43f107406dad0f697a107e7f6b4f6861c9831a64"} err="failed to get container status \"a139bdbcd44b10bd4347feac43f107406dad0f697a107e7f6b4f6861c9831a64\": rpc error: code = NotFound desc = could not find container \"a139bdbcd44b10bd4347feac43f107406dad0f697a107e7f6b4f6861c9831a64\": container with ID starting with a139bdbcd44b10bd4347feac43f107406dad0f697a107e7f6b4f6861c9831a64 not found: ID does not exist"
Feb 18 20:25:45 crc kubenswrapper[4942]: I0218 20:25:45.050037 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b14df05a-46ca-4dba-a05c-8aca88ea9643" path="/var/lib/kubelet/pods/b14df05a-46ca-4dba-a05c-8aca88ea9643/volumes"
Feb 18 20:25:46 crc kubenswrapper[4942]: I0218 20:25:46.036358 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"
Feb 18 20:25:46 crc kubenswrapper[4942]: E0218 20:25:46.036954 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:25:57 crc kubenswrapper[4942]: I0218 20:25:57.036448 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"
Feb 18 20:25:57 crc kubenswrapper[4942]: E0218 20:25:57.037500 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:26:10 crc kubenswrapper[4942]: I0218 20:26:10.035838 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"
Feb 18 20:26:10 crc kubenswrapper[4942]: E0218 20:26:10.036640 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:26:25 crc kubenswrapper[4942]: I0218 20:26:25.037063 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"
Feb 18 20:26:25 crc kubenswrapper[4942]: I0218 20:26:25.313498 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"ea4f0d375bd63d31e9839963358f5e2cdba00a92ebcdf5c8c65e10c1b7192f1c"}
Feb 18 20:28:53 crc kubenswrapper[4942]: I0218 20:28:53.740896 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 20:28:53 crc kubenswrapper[4942]: I0218 20:28:53.741512 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 20:29:11 crc kubenswrapper[4942]: I0218 20:29:11.079265 4942 patch_prober.go:28] interesting pod/oauth-openshift-666545c866-26rlh container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 20:29:11 crc kubenswrapper[4942]: I0218 20:29:11.079908 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-666545c866-26rlh" podUID="78f383f9-664c-43eb-9253-d9df1eaa9716" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 18 20:29:11 crc kubenswrapper[4942]: I0218 20:29:11.084927 4942 patch_prober.go:28] interesting pod/oauth-openshift-666545c866-26rlh container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 18 20:29:11 crc kubenswrapper[4942]: I0218 20:29:11.085927 4942 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-666545c866-26rlh" podUID="78f383f9-664c-43eb-9253-d9df1eaa9716" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 18 20:29:23 crc kubenswrapper[4942]: I0218 20:29:23.741211 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 20:29:23 crc kubenswrapper[4942]: I0218 20:29:23.741843 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 20:29:53 crc kubenswrapper[4942]: I0218 20:29:53.741128 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 20:29:53 crc kubenswrapper[4942]: I0218 20:29:53.742823 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 20:29:53 crc kubenswrapper[4942]: I0218 20:29:53.742992 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4"
Feb 18 20:29:53 crc kubenswrapper[4942]: I0218 20:29:53.744030 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ea4f0d375bd63d31e9839963358f5e2cdba00a92ebcdf5c8c65e10c1b7192f1c"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 18 20:29:53 crc kubenswrapper[4942]: I0218 20:29:53.744234 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://ea4f0d375bd63d31e9839963358f5e2cdba00a92ebcdf5c8c65e10c1b7192f1c" gracePeriod=600
Feb 18 20:29:54 crc kubenswrapper[4942]: I0218 20:29:54.816314 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="ea4f0d375bd63d31e9839963358f5e2cdba00a92ebcdf5c8c65e10c1b7192f1c" exitCode=0
Feb 18 20:29:54 crc kubenswrapper[4942]: I0218 20:29:54.816374 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"ea4f0d375bd63d31e9839963358f5e2cdba00a92ebcdf5c8c65e10c1b7192f1c"}
Feb 18 20:29:54 crc kubenswrapper[4942]: I0218 20:29:54.817167 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f"}
Feb 18 20:29:54 crc kubenswrapper[4942]: I0218 20:29:54.817209 4942 scope.go:117] "RemoveContainer" containerID="13108c3e1f4853bccc21a0b4ca8d8754dcd7b8ef84f2648c54eda40929f45769"
Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.204063 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww"]
Feb 18 20:30:00 crc kubenswrapper[4942]: E0218 20:30:00.205147 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b14df05a-46ca-4dba-a05c-8aca88ea9643" containerName="registry-server"
Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.205164 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b14df05a-46ca-4dba-a05c-8aca88ea9643" containerName="registry-server"
Feb 18 20:30:00 crc kubenswrapper[4942]: E0218 20:30:00.205229 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b14df05a-46ca-4dba-a05c-8aca88ea9643" containerName="extract-content"
Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.205238 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b14df05a-46ca-4dba-a05c-8aca88ea9643" containerName="extract-content"
Feb 18 20:30:00 crc kubenswrapper[4942]: E0218 20:30:00.205258 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b14df05a-46ca-4dba-a05c-8aca88ea9643" containerName="extract-utilities"
Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.205269 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="b14df05a-46ca-4dba-a05c-8aca88ea9643" containerName="extract-utilities"
Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.205513 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="b14df05a-46ca-4dba-a05c-8aca88ea9643" containerName="registry-server"
Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.206396 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww"
Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.209475 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.209920 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.236327 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww"]
Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.347984 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pczv\" (UniqueName: \"kubernetes.io/projected/a10be051-b656-4065-834d-236e091e60e8-kube-api-access-2pczv\") pod \"collect-profiles-29524110-7kfww\" (UID: \"a10be051-b656-4065-834d-236e091e60e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww"
Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.348039 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a10be051-b656-4065-834d-236e091e60e8-config-volume\") pod \"collect-profiles-29524110-7kfww\" (UID: \"a10be051-b656-4065-834d-236e091e60e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww"
Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.348090 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a10be051-b656-4065-834d-236e091e60e8-secret-volume\") pod \"collect-profiles-29524110-7kfww\" (UID: \"a10be051-b656-4065-834d-236e091e60e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww"
Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.450678 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pczv\" (UniqueName: \"kubernetes.io/projected/a10be051-b656-4065-834d-236e091e60e8-kube-api-access-2pczv\") pod \"collect-profiles-29524110-7kfww\" (UID: \"a10be051-b656-4065-834d-236e091e60e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww"
Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.451021 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a10be051-b656-4065-834d-236e091e60e8-config-volume\") pod \"collect-profiles-29524110-7kfww\" (UID: \"a10be051-b656-4065-834d-236e091e60e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww"
Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.451147 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a10be051-b656-4065-834d-236e091e60e8-secret-volume\") pod \"collect-profiles-29524110-7kfww\" (UID: \"a10be051-b656-4065-834d-236e091e60e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww"
Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.452552 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a10be051-b656-4065-834d-236e091e60e8-config-volume\") pod \"collect-profiles-29524110-7kfww\" (UID: \"a10be051-b656-4065-834d-236e091e60e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww"
Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.458168 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a10be051-b656-4065-834d-236e091e60e8-secret-volume\") pod \"collect-profiles-29524110-7kfww\" (UID: \"a10be051-b656-4065-834d-236e091e60e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww"
Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.471152 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pczv\" (UniqueName: \"kubernetes.io/projected/a10be051-b656-4065-834d-236e091e60e8-kube-api-access-2pczv\") pod \"collect-profiles-29524110-7kfww\" (UID: \"a10be051-b656-4065-834d-236e091e60e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww"
Feb 18 20:30:00 crc kubenswrapper[4942]: I0218 20:30:00.533727 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww"
Feb 18 20:30:01 crc kubenswrapper[4942]: I0218 20:30:01.074737 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww"]
Feb 18 20:30:01 crc kubenswrapper[4942]: I0218 20:30:01.887268 4942 generic.go:334] "Generic (PLEG): container finished" podID="a10be051-b656-4065-834d-236e091e60e8" containerID="e91ad73bf04f5c5f2f890026f91c9070ffb5b22ca5dc09f77b422fa5636d374e" exitCode=0
Feb 18 20:30:01 crc kubenswrapper[4942]: I0218 20:30:01.887341 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww" event={"ID":"a10be051-b656-4065-834d-236e091e60e8","Type":"ContainerDied","Data":"e91ad73bf04f5c5f2f890026f91c9070ffb5b22ca5dc09f77b422fa5636d374e"}
Feb 18 20:30:01 crc kubenswrapper[4942]: I0218 20:30:01.888567 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww" event={"ID":"a10be051-b656-4065-834d-236e091e60e8","Type":"ContainerStarted","Data":"9e464cf346f1ffc3fe57a51da12c4b293901698a8c61c333719b7f47299883ea"}
Feb 18 20:30:03 crc kubenswrapper[4942]: I0218 20:30:03.355372 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww"
Feb 18 20:30:03 crc kubenswrapper[4942]: I0218 20:30:03.513579 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pczv\" (UniqueName: \"kubernetes.io/projected/a10be051-b656-4065-834d-236e091e60e8-kube-api-access-2pczv\") pod \"a10be051-b656-4065-834d-236e091e60e8\" (UID: \"a10be051-b656-4065-834d-236e091e60e8\") "
Feb 18 20:30:03 crc kubenswrapper[4942]: I0218 20:30:03.513843 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a10be051-b656-4065-834d-236e091e60e8-config-volume\") pod \"a10be051-b656-4065-834d-236e091e60e8\" (UID: \"a10be051-b656-4065-834d-236e091e60e8\") "
Feb 18 20:30:03 crc kubenswrapper[4942]: I0218 20:30:03.513954 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a10be051-b656-4065-834d-236e091e60e8-secret-volume\") pod \"a10be051-b656-4065-834d-236e091e60e8\" (UID: \"a10be051-b656-4065-834d-236e091e60e8\") "
Feb 18 20:30:03 crc kubenswrapper[4942]: I0218 20:30:03.514302 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a10be051-b656-4065-834d-236e091e60e8-config-volume" (OuterVolumeSpecName: "config-volume") pod "a10be051-b656-4065-834d-236e091e60e8" (UID: "a10be051-b656-4065-834d-236e091e60e8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 20:30:03 crc kubenswrapper[4942]: I0218 20:30:03.514985 4942 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a10be051-b656-4065-834d-236e091e60e8-config-volume\") on node \"crc\" DevicePath \"\""
Feb 18 20:30:03 crc kubenswrapper[4942]: I0218 20:30:03.524690 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a10be051-b656-4065-834d-236e091e60e8-kube-api-access-2pczv" (OuterVolumeSpecName: "kube-api-access-2pczv") pod "a10be051-b656-4065-834d-236e091e60e8" (UID: "a10be051-b656-4065-834d-236e091e60e8"). InnerVolumeSpecName "kube-api-access-2pczv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 20:30:03 crc kubenswrapper[4942]: I0218 20:30:03.526138 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a10be051-b656-4065-834d-236e091e60e8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a10be051-b656-4065-834d-236e091e60e8" (UID: "a10be051-b656-4065-834d-236e091e60e8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 20:30:03 crc kubenswrapper[4942]: I0218 20:30:03.616858 4942 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a10be051-b656-4065-834d-236e091e60e8-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 18 20:30:03 crc kubenswrapper[4942]: I0218 20:30:03.617113 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pczv\" (UniqueName: \"kubernetes.io/projected/a10be051-b656-4065-834d-236e091e60e8-kube-api-access-2pczv\") on node \"crc\" DevicePath \"\""
Feb 18 20:30:03 crc kubenswrapper[4942]: I0218 20:30:03.911736 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww" event={"ID":"a10be051-b656-4065-834d-236e091e60e8","Type":"ContainerDied","Data":"9e464cf346f1ffc3fe57a51da12c4b293901698a8c61c333719b7f47299883ea"}
Feb 18 20:30:03 crc kubenswrapper[4942]: I0218 20:30:03.911807 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e464cf346f1ffc3fe57a51da12c4b293901698a8c61c333719b7f47299883ea"
Feb 18 20:30:03 crc kubenswrapper[4942]: I0218 20:30:03.911924 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524110-7kfww"
Feb 18 20:30:04 crc kubenswrapper[4942]: I0218 20:30:04.454791 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9"]
Feb 18 20:30:04 crc kubenswrapper[4942]: I0218 20:30:04.470374 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524065-ckvj9"]
Feb 18 20:30:05 crc kubenswrapper[4942]: I0218 20:30:05.050415 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f02d65f2-f70f-4982-a9d5-fc9d75091181" path="/var/lib/kubelet/pods/f02d65f2-f70f-4982-a9d5-fc9d75091181/volumes"
Feb 18 20:30:50 crc kubenswrapper[4942]: I0218 20:30:50.924917 4942 scope.go:117] "RemoveContainer" containerID="bdd33fc87e63584fee347049c15193b1ff470c22181f3d250c7e0de28ba81fd9"
Feb 18 20:32:23 crc kubenswrapper[4942]: I0218 20:32:23.741846 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 20:32:23 crc kubenswrapper[4942]: I0218 20:32:23.742379 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 20:32:53 crc kubenswrapper[4942]: I0218 20:32:53.740693 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 20:32:53 crc kubenswrapper[4942]: I0218 20:32:53.741233 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 20:33:23 crc kubenswrapper[4942]: I0218 20:33:23.741467 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 20:33:23 crc kubenswrapper[4942]: I0218 20:33:23.742090 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 20:33:23 crc kubenswrapper[4942]: I0218 20:33:23.742155 4942 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4"
Feb 18 20:33:23 crc kubenswrapper[4942]: I0218 20:33:23.743154 4942 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f"} pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 18 20:33:23 crc kubenswrapper[4942]: I0218 20:33:23.743248 4942 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" containerID="cri-o://e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" gracePeriod=600
Feb 18 20:33:23 crc kubenswrapper[4942]: E0218 20:33:23.864808 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:33:24 crc kubenswrapper[4942]: I0218 20:33:24.257618 4942 generic.go:334] "Generic (PLEG): container finished" podID="28921539-823a-4439-a230-3b5aed7085cc" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" exitCode=0
Feb 18 20:33:24 crc kubenswrapper[4942]: I0218 20:33:24.257682 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerDied","Data":"e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f"}
Feb 18 20:33:24 crc kubenswrapper[4942]: I0218 20:33:24.257731 4942 scope.go:117] "RemoveContainer" containerID="ea4f0d375bd63d31e9839963358f5e2cdba00a92ebcdf5c8c65e10c1b7192f1c"
Feb 18 20:33:24 crc kubenswrapper[4942]: I0218 20:33:24.258740 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f"
Feb 18 20:33:24 crc kubenswrapper[4942]: E0218 20:33:24.259256 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:33:37 crc kubenswrapper[4942]: I0218 20:33:37.036193 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f"
Feb 18 20:33:37 crc kubenswrapper[4942]: E0218 20:33:37.037301 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:33:52 crc kubenswrapper[4942]: I0218 20:33:52.036386 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f"
Feb 18 20:33:52 crc kubenswrapper[4942]: E0218 20:33:52.037556 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.036664 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f"
Feb 18 20:34:07 crc kubenswrapper[4942]: E0218 20:34:07.037627 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed
container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.322923 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bpf4n"] Feb 18 20:34:07 crc kubenswrapper[4942]: E0218 20:34:07.323393 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a10be051-b656-4065-834d-236e091e60e8" containerName="collect-profiles" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.323415 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="a10be051-b656-4065-834d-236e091e60e8" containerName="collect-profiles" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.323675 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="a10be051-b656-4065-834d-236e091e60e8" containerName="collect-profiles" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.325368 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bpf4n" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.339939 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bpf4n"] Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.433041 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8277de0c-d81c-4d35-a68a-97ca7a1edd6b-utilities\") pod \"redhat-operators-bpf4n\" (UID: \"8277de0c-d81c-4d35-a68a-97ca7a1edd6b\") " pod="openshift-marketplace/redhat-operators-bpf4n" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.433153 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztqdr\" (UniqueName: \"kubernetes.io/projected/8277de0c-d81c-4d35-a68a-97ca7a1edd6b-kube-api-access-ztqdr\") pod \"redhat-operators-bpf4n\" (UID: \"8277de0c-d81c-4d35-a68a-97ca7a1edd6b\") " pod="openshift-marketplace/redhat-operators-bpf4n" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.433223 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8277de0c-d81c-4d35-a68a-97ca7a1edd6b-catalog-content\") pod \"redhat-operators-bpf4n\" (UID: \"8277de0c-d81c-4d35-a68a-97ca7a1edd6b\") " pod="openshift-marketplace/redhat-operators-bpf4n" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.535670 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8277de0c-d81c-4d35-a68a-97ca7a1edd6b-utilities\") pod \"redhat-operators-bpf4n\" (UID: \"8277de0c-d81c-4d35-a68a-97ca7a1edd6b\") " pod="openshift-marketplace/redhat-operators-bpf4n" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.535779 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ztqdr\" (UniqueName: \"kubernetes.io/projected/8277de0c-d81c-4d35-a68a-97ca7a1edd6b-kube-api-access-ztqdr\") pod \"redhat-operators-bpf4n\" (UID: \"8277de0c-d81c-4d35-a68a-97ca7a1edd6b\") " pod="openshift-marketplace/redhat-operators-bpf4n" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.535856 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8277de0c-d81c-4d35-a68a-97ca7a1edd6b-catalog-content\") pod \"redhat-operators-bpf4n\" (UID: \"8277de0c-d81c-4d35-a68a-97ca7a1edd6b\") " pod="openshift-marketplace/redhat-operators-bpf4n" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.537312 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8277de0c-d81c-4d35-a68a-97ca7a1edd6b-utilities\") pod \"redhat-operators-bpf4n\" (UID: \"8277de0c-d81c-4d35-a68a-97ca7a1edd6b\") " pod="openshift-marketplace/redhat-operators-bpf4n" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.537982 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8277de0c-d81c-4d35-a68a-97ca7a1edd6b-catalog-content\") pod \"redhat-operators-bpf4n\" (UID: \"8277de0c-d81c-4d35-a68a-97ca7a1edd6b\") " pod="openshift-marketplace/redhat-operators-bpf4n" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.575482 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztqdr\" (UniqueName: \"kubernetes.io/projected/8277de0c-d81c-4d35-a68a-97ca7a1edd6b-kube-api-access-ztqdr\") pod \"redhat-operators-bpf4n\" (UID: \"8277de0c-d81c-4d35-a68a-97ca7a1edd6b\") " pod="openshift-marketplace/redhat-operators-bpf4n" Feb 18 20:34:07 crc kubenswrapper[4942]: I0218 20:34:07.741615 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bpf4n" Feb 18 20:34:08 crc kubenswrapper[4942]: I0218 20:34:08.295823 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bpf4n"] Feb 18 20:34:08 crc kubenswrapper[4942]: I0218 20:34:08.803160 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bpf4n" event={"ID":"8277de0c-d81c-4d35-a68a-97ca7a1edd6b","Type":"ContainerStarted","Data":"781d76d1512bab16a1778d0f049717a35cc14741855ad3e1da58ce4ed191e1e2"} Feb 18 20:34:09 crc kubenswrapper[4942]: I0218 20:34:09.819933 4942 generic.go:334] "Generic (PLEG): container finished" podID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" containerID="e67ac2c620953275a3382ee0f7606ec0062a3b0d8a79dfa6e97d84b9d29351b8" exitCode=0 Feb 18 20:34:09 crc kubenswrapper[4942]: I0218 20:34:09.820060 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bpf4n" event={"ID":"8277de0c-d81c-4d35-a68a-97ca7a1edd6b","Type":"ContainerDied","Data":"e67ac2c620953275a3382ee0f7606ec0062a3b0d8a79dfa6e97d84b9d29351b8"} Feb 18 20:34:09 crc kubenswrapper[4942]: I0218 20:34:09.822807 4942 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:34:10 crc kubenswrapper[4942]: E0218 20:34:10.329430 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:34:10 crc kubenswrapper[4942]: E0218 20:34:10.329595 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztqdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-bpf4n_openshift-marketplace(8277de0c-d81c-4d35-a68a-97ca7a1edd6b): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:34:10 crc kubenswrapper[4942]: E0218 20:34:10.330780 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" 
pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:34:10 crc kubenswrapper[4942]: E0218 20:34:10.833611 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:34:20 crc kubenswrapper[4942]: I0218 20:34:20.036247 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:34:20 crc kubenswrapper[4942]: E0218 20:34:20.038520 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:34:24 crc kubenswrapper[4942]: E0218 20:34:24.711234 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:34:24 crc kubenswrapper[4942]: E0218 20:34:24.712196 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztqdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-bpf4n_openshift-marketplace(8277de0c-d81c-4d35-a68a-97ca7a1edd6b): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:34:24 crc kubenswrapper[4942]: E0218 20:34:24.713517 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-bpf4n" 
podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:34:25 crc kubenswrapper[4942]: I0218 20:34:25.411388 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-scbxm"] Feb 18 20:34:25 crc kubenswrapper[4942]: I0218 20:34:25.414789 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-scbxm" Feb 18 20:34:25 crc kubenswrapper[4942]: I0218 20:34:25.425694 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-scbxm"] Feb 18 20:34:25 crc kubenswrapper[4942]: I0218 20:34:25.597493 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b509214-59a6-4d42-9b1b-a0252c545c1d-utilities\") pod \"community-operators-scbxm\" (UID: \"6b509214-59a6-4d42-9b1b-a0252c545c1d\") " pod="openshift-marketplace/community-operators-scbxm" Feb 18 20:34:25 crc kubenswrapper[4942]: I0218 20:34:25.597539 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b509214-59a6-4d42-9b1b-a0252c545c1d-catalog-content\") pod \"community-operators-scbxm\" (UID: \"6b509214-59a6-4d42-9b1b-a0252c545c1d\") " pod="openshift-marketplace/community-operators-scbxm" Feb 18 20:34:25 crc kubenswrapper[4942]: I0218 20:34:25.597836 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cssts\" (UniqueName: \"kubernetes.io/projected/6b509214-59a6-4d42-9b1b-a0252c545c1d-kube-api-access-cssts\") pod \"community-operators-scbxm\" (UID: \"6b509214-59a6-4d42-9b1b-a0252c545c1d\") " pod="openshift-marketplace/community-operators-scbxm" Feb 18 20:34:25 crc kubenswrapper[4942]: I0218 20:34:25.699900 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/6b509214-59a6-4d42-9b1b-a0252c545c1d-utilities\") pod \"community-operators-scbxm\" (UID: \"6b509214-59a6-4d42-9b1b-a0252c545c1d\") " pod="openshift-marketplace/community-operators-scbxm" Feb 18 20:34:25 crc kubenswrapper[4942]: I0218 20:34:25.699960 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b509214-59a6-4d42-9b1b-a0252c545c1d-catalog-content\") pod \"community-operators-scbxm\" (UID: \"6b509214-59a6-4d42-9b1b-a0252c545c1d\") " pod="openshift-marketplace/community-operators-scbxm" Feb 18 20:34:25 crc kubenswrapper[4942]: I0218 20:34:25.700124 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cssts\" (UniqueName: \"kubernetes.io/projected/6b509214-59a6-4d42-9b1b-a0252c545c1d-kube-api-access-cssts\") pod \"community-operators-scbxm\" (UID: \"6b509214-59a6-4d42-9b1b-a0252c545c1d\") " pod="openshift-marketplace/community-operators-scbxm" Feb 18 20:34:25 crc kubenswrapper[4942]: I0218 20:34:25.701049 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b509214-59a6-4d42-9b1b-a0252c545c1d-catalog-content\") pod \"community-operators-scbxm\" (UID: \"6b509214-59a6-4d42-9b1b-a0252c545c1d\") " pod="openshift-marketplace/community-operators-scbxm" Feb 18 20:34:25 crc kubenswrapper[4942]: I0218 20:34:25.703036 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b509214-59a6-4d42-9b1b-a0252c545c1d-utilities\") pod \"community-operators-scbxm\" (UID: \"6b509214-59a6-4d42-9b1b-a0252c545c1d\") " pod="openshift-marketplace/community-operators-scbxm" Feb 18 20:34:25 crc kubenswrapper[4942]: I0218 20:34:25.723345 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cssts\" (UniqueName: 
\"kubernetes.io/projected/6b509214-59a6-4d42-9b1b-a0252c545c1d-kube-api-access-cssts\") pod \"community-operators-scbxm\" (UID: \"6b509214-59a6-4d42-9b1b-a0252c545c1d\") " pod="openshift-marketplace/community-operators-scbxm" Feb 18 20:34:25 crc kubenswrapper[4942]: I0218 20:34:25.745311 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-scbxm" Feb 18 20:34:26 crc kubenswrapper[4942]: I0218 20:34:26.231373 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-scbxm"] Feb 18 20:34:26 crc kubenswrapper[4942]: I0218 20:34:26.997250 4942 generic.go:334] "Generic (PLEG): container finished" podID="6b509214-59a6-4d42-9b1b-a0252c545c1d" containerID="93f8d81b8be45787cf6fc4b813deaf1827a837361f9a82316159f70be9ac2fc1" exitCode=0 Feb 18 20:34:27 crc kubenswrapper[4942]: I0218 20:34:26.997316 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-scbxm" event={"ID":"6b509214-59a6-4d42-9b1b-a0252c545c1d","Type":"ContainerDied","Data":"93f8d81b8be45787cf6fc4b813deaf1827a837361f9a82316159f70be9ac2fc1"} Feb 18 20:34:27 crc kubenswrapper[4942]: I0218 20:34:27.001157 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-scbxm" event={"ID":"6b509214-59a6-4d42-9b1b-a0252c545c1d","Type":"ContainerStarted","Data":"82176acbd80e8d50d3a8dcf73cd791e49339bba80d1a93a7a1da46d8f8dd41f8"} Feb 18 20:34:27 crc kubenswrapper[4942]: E0218 20:34:27.951859 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:34:27 crc kubenswrapper[4942]: E0218 20:34:27.952350 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cssts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-scbxm_openshift-marketplace(6b509214-59a6-4d42-9b1b-a0252c545c1d): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:34:27 crc kubenswrapper[4942]: E0218 20:34:27.953580 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" 
with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:34:28 crc kubenswrapper[4942]: E0218 20:34:28.023103 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:34:34 crc kubenswrapper[4942]: I0218 20:34:34.036397 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:34:34 crc kubenswrapper[4942]: E0218 20:34:34.037589 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:34:38 crc kubenswrapper[4942]: E0218 20:34:38.039213 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:34:42 crc kubenswrapper[4942]: E0218 20:34:42.514977 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 
Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:34:42 crc kubenswrapper[4942]: E0218 20:34:42.515896 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cssts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-scbxm_openshift-marketplace(6b509214-59a6-4d42-9b1b-a0252c545c1d): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 
Internal Server Error" logger="UnhandledError" Feb 18 20:34:42 crc kubenswrapper[4942]: E0218 20:34:42.517225 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:34:49 crc kubenswrapper[4942]: I0218 20:34:49.036279 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:34:49 crc kubenswrapper[4942]: E0218 20:34:49.037130 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:34:49 crc kubenswrapper[4942]: E0218 20:34:49.814678 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:34:49 crc kubenswrapper[4942]: E0218 20:34:49.815119 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztqdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-bpf4n_openshift-marketplace(8277de0c-d81c-4d35-a68a-97ca7a1edd6b): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:34:49 crc kubenswrapper[4942]: E0218 20:34:49.816365 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-bpf4n" 
podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:34:57 crc kubenswrapper[4942]: E0218 20:34:57.038876 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:35:02 crc kubenswrapper[4942]: E0218 20:35:02.039536 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:35:04 crc kubenswrapper[4942]: I0218 20:35:04.037110 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:35:04 crc kubenswrapper[4942]: E0218 20:35:04.038041 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:35:11 crc kubenswrapper[4942]: E0218 20:35:11.097147 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:35:11 crc kubenswrapper[4942]: E0218 20:35:11.097902 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cssts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-scbxm_openshift-marketplace(6b509214-59a6-4d42-9b1b-a0252c545c1d): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:35:11 crc kubenswrapper[4942]: E0218 20:35:11.099119 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" 
with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:35:13 crc kubenswrapper[4942]: E0218 20:35:13.039712 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:35:16 crc kubenswrapper[4942]: I0218 20:35:16.036429 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:35:16 crc kubenswrapper[4942]: E0218 20:35:16.037320 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:35:23 crc kubenswrapper[4942]: E0218 20:35:23.038201 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:35:26 crc kubenswrapper[4942]: E0218 20:35:26.039274 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:35:27 crc kubenswrapper[4942]: I0218 20:35:27.036569 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:35:27 crc kubenswrapper[4942]: E0218 20:35:27.037231 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:35:36 crc kubenswrapper[4942]: E0218 20:35:36.041995 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:35:38 crc kubenswrapper[4942]: I0218 20:35:38.515962 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8lc5d"] Feb 18 20:35:38 crc kubenswrapper[4942]: I0218 20:35:38.519991 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8lc5d" Feb 18 20:35:38 crc kubenswrapper[4942]: I0218 20:35:38.535133 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8lc5d"] Feb 18 20:35:38 crc kubenswrapper[4942]: I0218 20:35:38.681449 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqcsp\" (UniqueName: \"kubernetes.io/projected/618efece-b48e-4e8d-baef-15eb25017938-kube-api-access-nqcsp\") pod \"certified-operators-8lc5d\" (UID: \"618efece-b48e-4e8d-baef-15eb25017938\") " pod="openshift-marketplace/certified-operators-8lc5d" Feb 18 20:35:38 crc kubenswrapper[4942]: I0218 20:35:38.681517 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/618efece-b48e-4e8d-baef-15eb25017938-catalog-content\") pod \"certified-operators-8lc5d\" (UID: \"618efece-b48e-4e8d-baef-15eb25017938\") " pod="openshift-marketplace/certified-operators-8lc5d" Feb 18 20:35:38 crc kubenswrapper[4942]: I0218 20:35:38.681693 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/618efece-b48e-4e8d-baef-15eb25017938-utilities\") pod \"certified-operators-8lc5d\" (UID: \"618efece-b48e-4e8d-baef-15eb25017938\") " pod="openshift-marketplace/certified-operators-8lc5d" Feb 18 20:35:38 crc kubenswrapper[4942]: I0218 20:35:38.783503 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqcsp\" (UniqueName: \"kubernetes.io/projected/618efece-b48e-4e8d-baef-15eb25017938-kube-api-access-nqcsp\") pod \"certified-operators-8lc5d\" (UID: \"618efece-b48e-4e8d-baef-15eb25017938\") " pod="openshift-marketplace/certified-operators-8lc5d" Feb 18 20:35:38 crc kubenswrapper[4942]: I0218 20:35:38.783559 4942 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/618efece-b48e-4e8d-baef-15eb25017938-catalog-content\") pod \"certified-operators-8lc5d\" (UID: \"618efece-b48e-4e8d-baef-15eb25017938\") " pod="openshift-marketplace/certified-operators-8lc5d" Feb 18 20:35:38 crc kubenswrapper[4942]: I0218 20:35:38.783631 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/618efece-b48e-4e8d-baef-15eb25017938-utilities\") pod \"certified-operators-8lc5d\" (UID: \"618efece-b48e-4e8d-baef-15eb25017938\") " pod="openshift-marketplace/certified-operators-8lc5d" Feb 18 20:35:38 crc kubenswrapper[4942]: I0218 20:35:38.784230 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/618efece-b48e-4e8d-baef-15eb25017938-utilities\") pod \"certified-operators-8lc5d\" (UID: \"618efece-b48e-4e8d-baef-15eb25017938\") " pod="openshift-marketplace/certified-operators-8lc5d" Feb 18 20:35:38 crc kubenswrapper[4942]: I0218 20:35:38.784482 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/618efece-b48e-4e8d-baef-15eb25017938-catalog-content\") pod \"certified-operators-8lc5d\" (UID: \"618efece-b48e-4e8d-baef-15eb25017938\") " pod="openshift-marketplace/certified-operators-8lc5d" Feb 18 20:35:38 crc kubenswrapper[4942]: I0218 20:35:38.816677 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqcsp\" (UniqueName: \"kubernetes.io/projected/618efece-b48e-4e8d-baef-15eb25017938-kube-api-access-nqcsp\") pod \"certified-operators-8lc5d\" (UID: \"618efece-b48e-4e8d-baef-15eb25017938\") " pod="openshift-marketplace/certified-operators-8lc5d" Feb 18 20:35:38 crc kubenswrapper[4942]: I0218 20:35:38.854247 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8lc5d" Feb 18 20:35:39 crc kubenswrapper[4942]: I0218 20:35:39.414754 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8lc5d"] Feb 18 20:35:39 crc kubenswrapper[4942]: I0218 20:35:39.834716 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lc5d" event={"ID":"618efece-b48e-4e8d-baef-15eb25017938","Type":"ContainerStarted","Data":"c4e9d76c330550eb4d99211ab40437e6c91931f8c18e6fd30c8ca251b4f351a0"} Feb 18 20:35:40 crc kubenswrapper[4942]: I0218 20:35:40.850101 4942 generic.go:334] "Generic (PLEG): container finished" podID="618efece-b48e-4e8d-baef-15eb25017938" containerID="28e845546a64e1540e9cd180df23419bbad8ebf7eb82a48eaf7341a83acac702" exitCode=0 Feb 18 20:35:40 crc kubenswrapper[4942]: I0218 20:35:40.850185 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lc5d" event={"ID":"618efece-b48e-4e8d-baef-15eb25017938","Type":"ContainerDied","Data":"28e845546a64e1540e9cd180df23419bbad8ebf7eb82a48eaf7341a83acac702"} Feb 18 20:35:41 crc kubenswrapper[4942]: I0218 20:35:41.061526 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:35:41 crc kubenswrapper[4942]: E0218 20:35:41.062283 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:35:41 crc kubenswrapper[4942]: I0218 20:35:41.496812 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fssdt"] Feb 
18 20:35:41 crc kubenswrapper[4942]: I0218 20:35:41.498946 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fssdt" Feb 18 20:35:41 crc kubenswrapper[4942]: I0218 20:35:41.535225 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fssdt"] Feb 18 20:35:41 crc kubenswrapper[4942]: I0218 20:35:41.656943 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9fb128b-71df-4bd4-8e7c-6494714c5a0c-utilities\") pod \"redhat-marketplace-fssdt\" (UID: \"a9fb128b-71df-4bd4-8e7c-6494714c5a0c\") " pod="openshift-marketplace/redhat-marketplace-fssdt" Feb 18 20:35:41 crc kubenswrapper[4942]: I0218 20:35:41.657039 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9fb128b-71df-4bd4-8e7c-6494714c5a0c-catalog-content\") pod \"redhat-marketplace-fssdt\" (UID: \"a9fb128b-71df-4bd4-8e7c-6494714c5a0c\") " pod="openshift-marketplace/redhat-marketplace-fssdt" Feb 18 20:35:41 crc kubenswrapper[4942]: I0218 20:35:41.657573 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8skx\" (UniqueName: \"kubernetes.io/projected/a9fb128b-71df-4bd4-8e7c-6494714c5a0c-kube-api-access-x8skx\") pod \"redhat-marketplace-fssdt\" (UID: \"a9fb128b-71df-4bd4-8e7c-6494714c5a0c\") " pod="openshift-marketplace/redhat-marketplace-fssdt" Feb 18 20:35:41 crc kubenswrapper[4942]: E0218 20:35:41.666908 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 20:35:41 crc 
kubenswrapper[4942]: E0218 20:35:41.667082 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqcsp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8lc5d_openshift-marketplace(618efece-b48e-4e8d-baef-15eb25017938): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:35:41 crc kubenswrapper[4942]: E0218 
20:35:41.668272 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:35:41 crc kubenswrapper[4942]: I0218 20:35:41.759055 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8skx\" (UniqueName: \"kubernetes.io/projected/a9fb128b-71df-4bd4-8e7c-6494714c5a0c-kube-api-access-x8skx\") pod \"redhat-marketplace-fssdt\" (UID: \"a9fb128b-71df-4bd4-8e7c-6494714c5a0c\") " pod="openshift-marketplace/redhat-marketplace-fssdt" Feb 18 20:35:41 crc kubenswrapper[4942]: I0218 20:35:41.759154 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9fb128b-71df-4bd4-8e7c-6494714c5a0c-utilities\") pod \"redhat-marketplace-fssdt\" (UID: \"a9fb128b-71df-4bd4-8e7c-6494714c5a0c\") " pod="openshift-marketplace/redhat-marketplace-fssdt" Feb 18 20:35:41 crc kubenswrapper[4942]: I0218 20:35:41.759214 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9fb128b-71df-4bd4-8e7c-6494714c5a0c-catalog-content\") pod \"redhat-marketplace-fssdt\" (UID: \"a9fb128b-71df-4bd4-8e7c-6494714c5a0c\") " pod="openshift-marketplace/redhat-marketplace-fssdt" Feb 18 20:35:41 crc kubenswrapper[4942]: I0218 20:35:41.759878 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9fb128b-71df-4bd4-8e7c-6494714c5a0c-catalog-content\") pod \"redhat-marketplace-fssdt\" (UID: \"a9fb128b-71df-4bd4-8e7c-6494714c5a0c\") " pod="openshift-marketplace/redhat-marketplace-fssdt" Feb 18 20:35:41 crc 
kubenswrapper[4942]: I0218 20:35:41.759892 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9fb128b-71df-4bd4-8e7c-6494714c5a0c-utilities\") pod \"redhat-marketplace-fssdt\" (UID: \"a9fb128b-71df-4bd4-8e7c-6494714c5a0c\") " pod="openshift-marketplace/redhat-marketplace-fssdt" Feb 18 20:35:41 crc kubenswrapper[4942]: E0218 20:35:41.861077 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:35:42 crc kubenswrapper[4942]: I0218 20:35:42.468986 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8skx\" (UniqueName: \"kubernetes.io/projected/a9fb128b-71df-4bd4-8e7c-6494714c5a0c-kube-api-access-x8skx\") pod \"redhat-marketplace-fssdt\" (UID: \"a9fb128b-71df-4bd4-8e7c-6494714c5a0c\") " pod="openshift-marketplace/redhat-marketplace-fssdt" Feb 18 20:35:42 crc kubenswrapper[4942]: I0218 20:35:42.725631 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fssdt" Feb 18 20:35:43 crc kubenswrapper[4942]: I0218 20:35:43.118425 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fssdt"] Feb 18 20:35:43 crc kubenswrapper[4942]: W0218 20:35:43.123883 4942 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9fb128b_71df_4bd4_8e7c_6494714c5a0c.slice/crio-2e20291057bbeb9e1e0db649b4b0aca356b84913d0b8de85162b4ab533dc0484 WatchSource:0}: Error finding container 2e20291057bbeb9e1e0db649b4b0aca356b84913d0b8de85162b4ab533dc0484: Status 404 returned error can't find the container with id 2e20291057bbeb9e1e0db649b4b0aca356b84913d0b8de85162b4ab533dc0484 Feb 18 20:35:43 crc kubenswrapper[4942]: I0218 20:35:43.893456 4942 generic.go:334] "Generic (PLEG): container finished" podID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" containerID="799084e5d6096a3d77015453eca62d3d9ef509516f5bf2db78bfc07d368af996" exitCode=0 Feb 18 20:35:43 crc kubenswrapper[4942]: I0218 20:35:43.893755 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fssdt" event={"ID":"a9fb128b-71df-4bd4-8e7c-6494714c5a0c","Type":"ContainerDied","Data":"799084e5d6096a3d77015453eca62d3d9ef509516f5bf2db78bfc07d368af996"} Feb 18 20:35:43 crc kubenswrapper[4942]: I0218 20:35:43.893811 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fssdt" event={"ID":"a9fb128b-71df-4bd4-8e7c-6494714c5a0c","Type":"ContainerStarted","Data":"2e20291057bbeb9e1e0db649b4b0aca356b84913d0b8de85162b4ab533dc0484"} Feb 18 20:35:45 crc kubenswrapper[4942]: E0218 20:35:45.187234 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" 
image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 20:35:45 crc kubenswrapper[4942]: E0218 20:35:45.187829 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8skx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fssdt_openshift-marketplace(a9fb128b-71df-4bd4-8e7c-6494714c5a0c): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" 
logger="UnhandledError" Feb 18 20:35:45 crc kubenswrapper[4942]: E0218 20:35:45.189528 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:35:45 crc kubenswrapper[4942]: E0218 20:35:45.804114 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:35:45 crc kubenswrapper[4942]: E0218 20:35:45.804620 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztqdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-bpf4n_openshift-marketplace(8277de0c-d81c-4d35-a68a-97ca7a1edd6b): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:35:45 crc kubenswrapper[4942]: E0218 20:35:45.806340 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-bpf4n" 
podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:35:45 crc kubenswrapper[4942]: E0218 20:35:45.922947 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:35:48 crc kubenswrapper[4942]: E0218 20:35:48.038102 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:35:52 crc kubenswrapper[4942]: I0218 20:35:52.036934 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:35:52 crc kubenswrapper[4942]: E0218 20:35:52.037702 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:35:58 crc kubenswrapper[4942]: E0218 20:35:58.417483 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 20:35:58 crc kubenswrapper[4942]: E0218 20:35:58.418622 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init 
container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqcsp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8lc5d_openshift-marketplace(618efece-b48e-4e8d-baef-15eb25017938): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:35:58 crc kubenswrapper[4942]: E0218 20:35:58.420311 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:35:59 crc kubenswrapper[4942]: E0218 20:35:59.038614 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:36:00 crc kubenswrapper[4942]: E0218 20:36:00.025085 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 20:36:00 crc kubenswrapper[4942]: E0218 20:36:00.025262 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8skx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fssdt_openshift-marketplace(a9fb128b-71df-4bd4-8e7c-6494714c5a0c): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:36:00 crc kubenswrapper[4942]: E0218 20:36:00.026513 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-fssdt" 
podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:36:02 crc kubenswrapper[4942]: E0218 20:36:02.706671 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:36:02 crc kubenswrapper[4942]: E0218 20:36:02.707084 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cssts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice
{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-scbxm_openshift-marketplace(6b509214-59a6-4d42-9b1b-a0252c545c1d): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:36:02 crc kubenswrapper[4942]: E0218 20:36:02.708378 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:36:07 crc kubenswrapper[4942]: I0218 20:36:07.036297 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:36:07 crc kubenswrapper[4942]: E0218 20:36:07.037141 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:36:09 crc kubenswrapper[4942]: E0218 20:36:09.039126 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:36:12 crc kubenswrapper[4942]: E0218 20:36:12.039216 4942 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:36:12 crc kubenswrapper[4942]: E0218 20:36:12.039633 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:36:17 crc kubenswrapper[4942]: E0218 20:36:17.037856 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:36:19 crc kubenswrapper[4942]: I0218 20:36:19.035746 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:36:19 crc kubenswrapper[4942]: E0218 20:36:19.036489 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:36:23 crc kubenswrapper[4942]: E0218 20:36:23.314663 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 
500 Internal Server Error" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 20:36:23 crc kubenswrapper[4942]: E0218 20:36:23.315413 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqcsp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8lc5d_openshift-marketplace(618efece-b48e-4e8d-baef-15eb25017938): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 
500 Internal Server Error" logger="UnhandledError" Feb 18 20:36:23 crc kubenswrapper[4942]: E0218 20:36:23.316700 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:36:23 crc kubenswrapper[4942]: E0218 20:36:23.784071 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 20:36:23 crc kubenswrapper[4942]: E0218 20:36:23.784539 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8skx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fssdt_openshift-marketplace(a9fb128b-71df-4bd4-8e7c-6494714c5a0c): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:36:23 crc kubenswrapper[4942]: E0218 20:36:23.785707 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-fssdt" 
podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:36:25 crc kubenswrapper[4942]: I0218 20:36:25.399164 4942 generic.go:334] "Generic (PLEG): container finished" podID="498a3ae0-adb2-4729-a2eb-78e267e1613b" containerID="169b9c7b6b3a31907bbb5568c6300b81731785a07744ed74ff40a7d3cf050f29" exitCode=1 Feb 18 20:36:25 crc kubenswrapper[4942]: I0218 20:36:25.399288 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"498a3ae0-adb2-4729-a2eb-78e267e1613b","Type":"ContainerDied","Data":"169b9c7b6b3a31907bbb5568c6300b81731785a07744ed74ff40a7d3cf050f29"} Feb 18 20:36:26 crc kubenswrapper[4942]: E0218 20:36:26.040598 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:36:26 crc kubenswrapper[4942]: I0218 20:36:26.961315 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.007013 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/498a3ae0-adb2-4729-a2eb-78e267e1613b-config-data\") pod \"498a3ae0-adb2-4729-a2eb-78e267e1613b\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.007124 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/498a3ae0-adb2-4729-a2eb-78e267e1613b-test-operator-ephemeral-temporary\") pod \"498a3ae0-adb2-4729-a2eb-78e267e1613b\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.007178 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/498a3ae0-adb2-4729-a2eb-78e267e1613b-test-operator-ephemeral-workdir\") pod \"498a3ae0-adb2-4729-a2eb-78e267e1613b\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.007204 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"498a3ae0-adb2-4729-a2eb-78e267e1613b\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.007231 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-openstack-config-secret\") pod \"498a3ae0-adb2-4729-a2eb-78e267e1613b\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.007264 4942 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/498a3ae0-adb2-4729-a2eb-78e267e1613b-openstack-config\") pod \"498a3ae0-adb2-4729-a2eb-78e267e1613b\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.007349 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-ssh-key\") pod \"498a3ae0-adb2-4729-a2eb-78e267e1613b\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.007410 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-ca-certs\") pod \"498a3ae0-adb2-4729-a2eb-78e267e1613b\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.007442 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnpsv\" (UniqueName: \"kubernetes.io/projected/498a3ae0-adb2-4729-a2eb-78e267e1613b-kube-api-access-nnpsv\") pod \"498a3ae0-adb2-4729-a2eb-78e267e1613b\" (UID: \"498a3ae0-adb2-4729-a2eb-78e267e1613b\") " Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.007910 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/498a3ae0-adb2-4729-a2eb-78e267e1613b-config-data" (OuterVolumeSpecName: "config-data") pod "498a3ae0-adb2-4729-a2eb-78e267e1613b" (UID: "498a3ae0-adb2-4729-a2eb-78e267e1613b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.008192 4942 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/498a3ae0-adb2-4729-a2eb-78e267e1613b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.008812 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/498a3ae0-adb2-4729-a2eb-78e267e1613b-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "498a3ae0-adb2-4729-a2eb-78e267e1613b" (UID: "498a3ae0-adb2-4729-a2eb-78e267e1613b"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.013968 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "498a3ae0-adb2-4729-a2eb-78e267e1613b" (UID: "498a3ae0-adb2-4729-a2eb-78e267e1613b"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.022011 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/498a3ae0-adb2-4729-a2eb-78e267e1613b-kube-api-access-nnpsv" (OuterVolumeSpecName: "kube-api-access-nnpsv") pod "498a3ae0-adb2-4729-a2eb-78e267e1613b" (UID: "498a3ae0-adb2-4729-a2eb-78e267e1613b"). InnerVolumeSpecName "kube-api-access-nnpsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.038947 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "498a3ae0-adb2-4729-a2eb-78e267e1613b" (UID: "498a3ae0-adb2-4729-a2eb-78e267e1613b"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.061389 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "498a3ae0-adb2-4729-a2eb-78e267e1613b" (UID: "498a3ae0-adb2-4729-a2eb-78e267e1613b"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.075919 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "498a3ae0-adb2-4729-a2eb-78e267e1613b" (UID: "498a3ae0-adb2-4729-a2eb-78e267e1613b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.089523 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/498a3ae0-adb2-4729-a2eb-78e267e1613b-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "498a3ae0-adb2-4729-a2eb-78e267e1613b" (UID: "498a3ae0-adb2-4729-a2eb-78e267e1613b"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.102484 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/498a3ae0-adb2-4729-a2eb-78e267e1613b-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "498a3ae0-adb2-4729-a2eb-78e267e1613b" (UID: "498a3ae0-adb2-4729-a2eb-78e267e1613b"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.110442 4942 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/498a3ae0-adb2-4729-a2eb-78e267e1613b-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.110508 4942 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/498a3ae0-adb2-4729-a2eb-78e267e1613b-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.110534 4942 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.110548 4942 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.110563 4942 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/498a3ae0-adb2-4729-a2eb-78e267e1613b-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 18 20:36:27 crc 
kubenswrapper[4942]: I0218 20:36:27.110576 4942 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.110587 4942 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/498a3ae0-adb2-4729-a2eb-78e267e1613b-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.110601 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnpsv\" (UniqueName: \"kubernetes.io/projected/498a3ae0-adb2-4729-a2eb-78e267e1613b-kube-api-access-nnpsv\") on node \"crc\" DevicePath \"\"" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.136542 4942 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.212921 4942 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.421025 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"498a3ae0-adb2-4729-a2eb-78e267e1613b","Type":"ContainerDied","Data":"4638cc0d3971f910691e7e7ad60b86d01493160078b4f86a07d3570748f42e2f"} Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.421084 4942 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4638cc0d3971f910691e7e7ad60b86d01493160078b4f86a07d3570748f42e2f" Feb 18 20:36:27 crc kubenswrapper[4942]: I0218 20:36:27.421106 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 18 20:36:28 crc kubenswrapper[4942]: I0218 20:36:28.961102 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 20:36:28 crc kubenswrapper[4942]: E0218 20:36:28.962035 4942 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498a3ae0-adb2-4729-a2eb-78e267e1613b" containerName="tempest-tests-tempest-tests-runner" Feb 18 20:36:28 crc kubenswrapper[4942]: I0218 20:36:28.962059 4942 state_mem.go:107] "Deleted CPUSet assignment" podUID="498a3ae0-adb2-4729-a2eb-78e267e1613b" containerName="tempest-tests-tempest-tests-runner" Feb 18 20:36:28 crc kubenswrapper[4942]: I0218 20:36:28.962406 4942 memory_manager.go:354] "RemoveStaleState removing state" podUID="498a3ae0-adb2-4729-a2eb-78e267e1613b" containerName="tempest-tests-tempest-tests-runner" Feb 18 20:36:28 crc kubenswrapper[4942]: I0218 20:36:28.963484 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 20:36:28 crc kubenswrapper[4942]: I0218 20:36:28.966326 4942 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fgwq4" Feb 18 20:36:28 crc kubenswrapper[4942]: I0218 20:36:28.975934 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 18 20:36:29 crc kubenswrapper[4942]: E0218 20:36:29.038106 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:36:29 crc kubenswrapper[4942]: I0218 20:36:29.052606 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fa910027-8bd8-4779-9dc5-9071534fa252\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 20:36:29 crc kubenswrapper[4942]: I0218 20:36:29.052706 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp2tv\" (UniqueName: \"kubernetes.io/projected/fa910027-8bd8-4779-9dc5-9071534fa252-kube-api-access-lp2tv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fa910027-8bd8-4779-9dc5-9071534fa252\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 20:36:29 crc kubenswrapper[4942]: I0218 20:36:29.154464 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp2tv\" (UniqueName: 
\"kubernetes.io/projected/fa910027-8bd8-4779-9dc5-9071534fa252-kube-api-access-lp2tv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fa910027-8bd8-4779-9dc5-9071534fa252\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 20:36:29 crc kubenswrapper[4942]: I0218 20:36:29.154747 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fa910027-8bd8-4779-9dc5-9071534fa252\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 20:36:29 crc kubenswrapper[4942]: I0218 20:36:29.155072 4942 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fa910027-8bd8-4779-9dc5-9071534fa252\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 20:36:29 crc kubenswrapper[4942]: I0218 20:36:29.177106 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp2tv\" (UniqueName: \"kubernetes.io/projected/fa910027-8bd8-4779-9dc5-9071534fa252-kube-api-access-lp2tv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fa910027-8bd8-4779-9dc5-9071534fa252\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 20:36:29 crc kubenswrapper[4942]: I0218 20:36:29.178573 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fa910027-8bd8-4779-9dc5-9071534fa252\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 18 20:36:29 
crc kubenswrapper[4942]: I0218 20:36:29.303646 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 18 20:36:29 crc kubenswrapper[4942]: I0218 20:36:29.794746 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 18 20:36:30 crc kubenswrapper[4942]: I0218 20:36:30.458971 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"fa910027-8bd8-4779-9dc5-9071534fa252","Type":"ContainerStarted","Data":"4fa10290a286dccbab8e982b7fe69b9138d97c789c261ecc98b3a52a30c71931"}
Feb 18 20:36:30 crc kubenswrapper[4942]: E0218 20:36:30.789505 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/quay/busybox:latest"
Feb 18 20:36:30 crc kubenswrapper[4942]: E0218 20:36:30.789946 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lp2tv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(fa910027-8bd8-4779-9dc5-9071534fa252): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:36:30 crc kubenswrapper[4942]: E0218 20:36:30.791130 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252"
Feb 18 20:36:31 crc kubenswrapper[4942]: E0218 20:36:31.473642 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252"
Feb 18 20:36:32 crc kubenswrapper[4942]: I0218 20:36:32.036549 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f"
Feb 18 20:36:32 crc kubenswrapper[4942]: E0218 20:36:32.036970 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:36:37 crc kubenswrapper[4942]: E0218 20:36:37.039168 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c"
Feb 18 20:36:38 crc kubenswrapper[4942]: E0218 20:36:38.038693 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938"
Feb 18 20:36:40 crc kubenswrapper[4942]: E0218 20:36:40.038664 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d"
Feb 18 20:36:41 crc kubenswrapper[4942]: E0218 20:36:41.069002 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b"
Feb 18 20:36:43 crc kubenswrapper[4942]: I0218 20:36:43.036129 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f"
Feb 18 20:36:43 crc kubenswrapper[4942]: E0218 20:36:43.037352 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:36:44 crc kubenswrapper[4942]: E0218 20:36:44.481846 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/quay/busybox:latest"
Feb 18 20:36:44 crc kubenswrapper[4942]: E0218 20:36:44.482078 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lp2tv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(fa910027-8bd8-4779-9dc5-9071534fa252): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:36:44 crc kubenswrapper[4942]: E0218 20:36:44.484426 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252"
Feb 18 20:36:49 crc kubenswrapper[4942]: E0218 20:36:49.039359 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938"
Feb 18 20:36:51 crc kubenswrapper[4942]: E0218 20:36:51.055753 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d"
Feb 18 20:36:52 crc kubenswrapper[4942]: E0218 20:36:52.038226 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c"
Feb 18 20:36:54 crc kubenswrapper[4942]: I0218 20:36:54.036183 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f"
Feb 18 20:36:54 crc kubenswrapper[4942]: E0218 20:36:54.036597 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:36:54 crc kubenswrapper[4942]: E0218 20:36:54.038277 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b"
Feb 18 20:36:58 crc kubenswrapper[4942]: E0218 20:36:58.038961 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252"
Feb 18 20:37:02 crc kubenswrapper[4942]: E0218 20:37:02.039691 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938"
Feb 18 20:37:02 crc kubenswrapper[4942]: E0218 20:37:02.039732 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d"
Feb 18 20:37:06 crc kubenswrapper[4942]: E0218 20:37:06.022545 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Feb 18 20:37:06 crc kubenswrapper[4942]: E0218 20:37:06.023096 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8skx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fssdt_openshift-marketplace(a9fb128b-71df-4bd4-8e7c-6494714c5a0c): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:37:06 crc kubenswrapper[4942]: E0218 20:37:06.024404 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c"
Feb 18 20:37:07 crc kubenswrapper[4942]: I0218 20:37:07.037753 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f"
Feb 18 20:37:07 crc kubenswrapper[4942]: E0218 20:37:07.038358 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:37:11 crc kubenswrapper[4942]: E0218 20:37:11.282884 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Feb 18 20:37:11 crc kubenswrapper[4942]: E0218 20:37:11.283540 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztqdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-bpf4n_openshift-marketplace(8277de0c-d81c-4d35-a68a-97ca7a1edd6b): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:37:11 crc kubenswrapper[4942]: E0218 20:37:11.284754 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b"
Feb 18 20:37:13 crc kubenswrapper[4942]: E0218 20:37:13.833893 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/quay/busybox:latest"
Feb 18 20:37:13 crc kubenswrapper[4942]: E0218 20:37:13.834336 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lp2tv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(fa910027-8bd8-4779-9dc5-9071534fa252): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:37:13 crc kubenswrapper[4942]: E0218 20:37:13.835510 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252"
Feb 18 20:37:14 crc kubenswrapper[4942]: E0218 20:37:14.037507 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d"
Feb 18 20:37:17 crc kubenswrapper[4942]: E0218 20:37:17.194455 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Feb 18 20:37:17 crc kubenswrapper[4942]: E0218 20:37:17.194641 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqcsp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8lc5d_openshift-marketplace(618efece-b48e-4e8d-baef-15eb25017938): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:37:17 crc kubenswrapper[4942]: E0218 20:37:17.195971 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938"
Feb 18 20:37:18 crc kubenswrapper[4942]: E0218 20:37:18.038389 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c"
Feb 18 20:37:20 crc kubenswrapper[4942]: I0218 20:37:20.036584 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f"
Feb 18 20:37:20 crc kubenswrapper[4942]: E0218 20:37:20.037165 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:37:26 crc kubenswrapper[4942]: E0218 20:37:26.039951 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b"
Feb 18 20:37:27 crc kubenswrapper[4942]: E0218 20:37:27.038100 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252"
Feb 18 20:37:28 crc kubenswrapper[4942]: E0218 20:37:28.616370 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 18 20:37:28 crc kubenswrapper[4942]: E0218 20:37:28.616754 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cssts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-scbxm_openshift-marketplace(6b509214-59a6-4d42-9b1b-a0252c545c1d): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:37:28 crc kubenswrapper[4942]: E0218 20:37:28.618002 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d"
Feb 18 20:37:29 crc kubenswrapper[4942]: E0218 20:37:29.037294 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938"
Feb 18 20:37:30 crc kubenswrapper[4942]: E0218 20:37:30.038411 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c"
Feb 18 20:37:32 crc kubenswrapper[4942]: I0218 20:37:32.036682 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f"
Feb 18 20:37:32 crc kubenswrapper[4942]: E0218 20:37:32.038348 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:37:37 crc kubenswrapper[4942]: E0218 20:37:37.038717 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b"
Feb 18 20:37:40 crc kubenswrapper[4942]: E0218 20:37:40.038506 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252"
Feb 18 20:37:40 crc kubenswrapper[4942]: E0218 20:37:40.040919 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938"
Feb 18 20:37:41 crc kubenswrapper[4942]: E0218 20:37:41.053393 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d"
Feb 18 20:37:43 crc kubenswrapper[4942]: E0218 20:37:43.040053 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c"
Feb 18 20:37:44 crc kubenswrapper[4942]: I0218 20:37:44.035829 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f"
Feb 18 20:37:44 crc kubenswrapper[4942]: E0218 20:37:44.036499 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:37:51 crc kubenswrapper[4942]: E0218 20:37:51.069473 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938"
Feb 18 20:37:52 crc kubenswrapper[4942]: E0218 20:37:52.039517 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252"
Feb 18 20:37:52 crc kubenswrapper[4942]: E0218 20:37:52.039540 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d"
Feb 18 20:37:52 crc kubenswrapper[4942]: E0218 20:37:52.040611 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b"
Feb 18 20:37:55 crc kubenswrapper[4942]: E0218 20:37:55.040213 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c"
Feb 18 20:37:58 crc kubenswrapper[4942]: I0218 20:37:58.035976 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f"
Feb 18 20:37:58 crc kubenswrapper[4942]: E0218 20:37:58.037043 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc"
Feb 18 20:38:03 crc kubenswrapper[4942]: E0218 20:38:03.043140 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d"
Feb 18 20:38:03 crc kubenswrapper[4942]: E0218 20:38:03.045286 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938"
Feb 18 20:38:05 crc kubenswrapper[4942]: E0218 20:38:05.038633 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b"
Feb 18 20:38:08 crc kubenswrapper[4942]: E0218 20:38:08.039842 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c"
Feb 18 20:38:08 crc kubenswrapper[4942]: E0218 20:38:08.331171 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/quay/busybox:latest"
Feb 18 20:38:08 crc kubenswrapper[4942]: E0218 20:38:08.331405 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lp2tv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(fa910027-8bd8-4779-9dc5-9071534fa252): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:38:08 crc kubenswrapper[4942]: E0218 20:38:08.332693 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252"
Feb 18 20:38:10 crc kubenswrapper[4942]: I0218 20:38:10.036272
4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:38:10 crc kubenswrapper[4942]: E0218 20:38:10.036980 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wqxh4_openshift-machine-config-operator(28921539-823a-4439-a230-3b5aed7085cc)\"" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" Feb 18 20:38:16 crc kubenswrapper[4942]: E0218 20:38:16.041823 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:38:16 crc kubenswrapper[4942]: E0218 20:38:16.041867 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:38:18 crc kubenswrapper[4942]: E0218 20:38:18.038904 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:38:20 crc kubenswrapper[4942]: E0218 20:38:20.038593 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:38:20 crc kubenswrapper[4942]: E0218 20:38:20.038977 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:38:25 crc kubenswrapper[4942]: I0218 20:38:25.036676 4942 scope.go:117] "RemoveContainer" containerID="e170d5371e4f5bbe6907c57c4924b945a84df7feec77a73107b0bd925e94b04f" Feb 18 20:38:25 crc kubenswrapper[4942]: I0218 20:38:25.765193 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" event={"ID":"28921539-823a-4439-a230-3b5aed7085cc","Type":"ContainerStarted","Data":"9bddad80fb30b139a11793b4f0ba955a9abbd5925ded5b3df80be5dbd41f27ba"} Feb 18 20:38:30 crc kubenswrapper[4942]: E0218 20:38:30.039666 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:38:31 crc kubenswrapper[4942]: E0218 20:38:31.038149 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:38:31 crc kubenswrapper[4942]: E0218 20:38:31.052932 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:38:33 crc kubenswrapper[4942]: E0218 20:38:33.038788 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:38:35 crc kubenswrapper[4942]: E0218 20:38:35.686688 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 18 20:38:35 crc kubenswrapper[4942]: E0218 20:38:35.687453 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8skx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fssdt_openshift-marketplace(a9fb128b-71df-4bd4-8e7c-6494714c5a0c): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:38:35 crc kubenswrapper[4942]: E0218 20:38:35.688825 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-fssdt" 
podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:38:41 crc kubenswrapper[4942]: E0218 20:38:41.054528 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:38:44 crc kubenswrapper[4942]: E0218 20:38:44.038454 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:38:45 crc kubenswrapper[4942]: E0218 20:38:45.037907 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:38:47 crc kubenswrapper[4942]: E0218 20:38:47.967512 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 18 20:38:47 crc kubenswrapper[4942]: E0218 20:38:47.968020 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqcsp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8lc5d_openshift-marketplace(618efece-b48e-4e8d-baef-15eb25017938): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:38:47 crc kubenswrapper[4942]: E0218 20:38:47.969256 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-8lc5d" 
podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:38:50 crc kubenswrapper[4942]: E0218 20:38:50.043365 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:38:50 crc kubenswrapper[4942]: I0218 20:38:50.050617 4942 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vfphw/must-gather-kzsbl"] Feb 18 20:38:50 crc kubenswrapper[4942]: I0218 20:38:50.052430 4942 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vfphw/must-gather-kzsbl" Feb 18 20:38:50 crc kubenswrapper[4942]: I0218 20:38:50.060118 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vfphw"/"kube-root-ca.crt" Feb 18 20:38:50 crc kubenswrapper[4942]: I0218 20:38:50.060121 4942 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vfphw"/"openshift-service-ca.crt" Feb 18 20:38:50 crc kubenswrapper[4942]: I0218 20:38:50.095736 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vfphw/must-gather-kzsbl"] Feb 18 20:38:50 crc kubenswrapper[4942]: I0218 20:38:50.133796 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x25bk\" (UniqueName: \"kubernetes.io/projected/96ac2dba-808d-4839-9f27-9c77b6d1f97d-kube-api-access-x25bk\") pod \"must-gather-kzsbl\" (UID: \"96ac2dba-808d-4839-9f27-9c77b6d1f97d\") " pod="openshift-must-gather-vfphw/must-gather-kzsbl" Feb 18 20:38:50 crc kubenswrapper[4942]: I0218 20:38:50.133970 4942 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/96ac2dba-808d-4839-9f27-9c77b6d1f97d-must-gather-output\") pod \"must-gather-kzsbl\" (UID: \"96ac2dba-808d-4839-9f27-9c77b6d1f97d\") " pod="openshift-must-gather-vfphw/must-gather-kzsbl" Feb 18 20:38:50 crc kubenswrapper[4942]: I0218 20:38:50.236143 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/96ac2dba-808d-4839-9f27-9c77b6d1f97d-must-gather-output\") pod \"must-gather-kzsbl\" (UID: \"96ac2dba-808d-4839-9f27-9c77b6d1f97d\") " pod="openshift-must-gather-vfphw/must-gather-kzsbl" Feb 18 20:38:50 crc kubenswrapper[4942]: I0218 20:38:50.236342 4942 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x25bk\" (UniqueName: \"kubernetes.io/projected/96ac2dba-808d-4839-9f27-9c77b6d1f97d-kube-api-access-x25bk\") pod \"must-gather-kzsbl\" (UID: \"96ac2dba-808d-4839-9f27-9c77b6d1f97d\") " pod="openshift-must-gather-vfphw/must-gather-kzsbl" Feb 18 20:38:50 crc kubenswrapper[4942]: I0218 20:38:50.236721 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/96ac2dba-808d-4839-9f27-9c77b6d1f97d-must-gather-output\") pod \"must-gather-kzsbl\" (UID: \"96ac2dba-808d-4839-9f27-9c77b6d1f97d\") " pod="openshift-must-gather-vfphw/must-gather-kzsbl" Feb 18 20:38:50 crc kubenswrapper[4942]: I0218 20:38:50.255714 4942 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x25bk\" (UniqueName: \"kubernetes.io/projected/96ac2dba-808d-4839-9f27-9c77b6d1f97d-kube-api-access-x25bk\") pod \"must-gather-kzsbl\" (UID: \"96ac2dba-808d-4839-9f27-9c77b6d1f97d\") " pod="openshift-must-gather-vfphw/must-gather-kzsbl" Feb 18 20:38:50 crc kubenswrapper[4942]: I0218 20:38:50.379055 4942 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vfphw/must-gather-kzsbl" Feb 18 20:38:50 crc kubenswrapper[4942]: I0218 20:38:50.870370 4942 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vfphw/must-gather-kzsbl"] Feb 18 20:38:51 crc kubenswrapper[4942]: I0218 20:38:51.082503 4942 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vfphw/must-gather-kzsbl" event={"ID":"96ac2dba-808d-4839-9f27-9c77b6d1f97d","Type":"ContainerStarted","Data":"35d1f533ad495def710ab683242fdb7a66e65bd3343a40e16806102801f7451c"} Feb 18 20:38:51 crc kubenswrapper[4942]: E0218 20:38:51.521672 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openstack-k8s-operators/openstack-must-gather:latest" Feb 18 20:38:51 crc kubenswrapper[4942]: E0218 20:38:51.521896 4942 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 18 20:38:51 crc kubenswrapper[4942]: container &Container{Name:gather,Image:quay.io/openstack-k8s-operators/openstack-must-gather:latest,Command:[/bin/bash -c if command -v setsid >/dev/null 2>&1 && command -v ps >/dev/null 2>&1 && command -v pkill >/dev/null 2>&1; then Feb 18 20:38:51 crc kubenswrapper[4942]: HAVE_SESSION_TOOLS=true Feb 18 20:38:51 crc kubenswrapper[4942]: else Feb 18 20:38:51 crc kubenswrapper[4942]: HAVE_SESSION_TOOLS=false Feb 18 20:38:51 crc kubenswrapper[4942]: fi Feb 18 20:38:51 crc kubenswrapper[4942]: Feb 18 20:38:51 crc kubenswrapper[4942]: Feb 18 20:38:51 crc kubenswrapper[4942]: echo "[disk usage checker] Started" Feb 18 20:38:51 crc kubenswrapper[4942]: target_dir="/must-gather" Feb 18 20:38:51 crc kubenswrapper[4942]: usage_percentage_limit="80" Feb 18 20:38:51 crc kubenswrapper[4942]: while true; do Feb 18 20:38:51 crc kubenswrapper[4942]: usage_percentage=$(df -P "$target_dir" | awk 'NR==2 {print $5}' | sed 's/%//') Feb 18 
20:38:51 crc kubenswrapper[4942]: echo "[disk usage checker] Volume usage percentage: current = ${usage_percentage} ; allowed = ${usage_percentage_limit}" Feb 18 20:38:51 crc kubenswrapper[4942]: if [ "$usage_percentage" -gt "$usage_percentage_limit" ]; then Feb 18 20:38:51 crc kubenswrapper[4942]: echo "[disk usage checker] Disk usage exceeds the volume percentage of ${usage_percentage_limit} for mounted directory, terminating..." Feb 18 20:38:51 crc kubenswrapper[4942]: if [ "$HAVE_SESSION_TOOLS" = "true" ]; then Feb 18 20:38:51 crc kubenswrapper[4942]: ps -o sess --no-headers | sort -u | while read sid; do Feb 18 20:38:51 crc kubenswrapper[4942]: [[ "$sid" -eq "${$}" ]] && continue Feb 18 20:38:51 crc kubenswrapper[4942]: pkill --signal SIGKILL --session "$sid" Feb 18 20:38:51 crc kubenswrapper[4942]: done Feb 18 20:38:51 crc kubenswrapper[4942]: else Feb 18 20:38:51 crc kubenswrapper[4942]: kill 0 Feb 18 20:38:51 crc kubenswrapper[4942]: fi Feb 18 20:38:51 crc kubenswrapper[4942]: exit 1 Feb 18 20:38:51 crc kubenswrapper[4942]: fi Feb 18 20:38:51 crc kubenswrapper[4942]: sleep 5 Feb 18 20:38:51 crc kubenswrapper[4942]: done & if [ "$HAVE_SESSION_TOOLS" = "true" ]; then Feb 18 20:38:51 crc kubenswrapper[4942]: setsid -w bash <<-MUSTGATHER_EOF Feb 18 20:38:51 crc kubenswrapper[4942]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather Feb 18 20:38:51 crc kubenswrapper[4942]: MUSTGATHER_EOF Feb 18 20:38:51 crc kubenswrapper[4942]: else Feb 18 20:38:51 crc kubenswrapper[4942]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather Feb 18 20:38:51 crc kubenswrapper[4942]: fi; sync && echo 'Caches written to 
disk'],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:must-gather-output,ReadOnly:false,MountPath:/must-gather,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x25bk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod must-gather-kzsbl_openshift-must-gather-vfphw(96ac2dba-808d-4839-9f27-9c77b6d1f97d): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error Feb 18 20:38:51 crc kubenswrapper[4942]: > logger="UnhandledError" Feb 18 20:38:51 crc kubenswrapper[4942]: E0218 20:38:51.524113 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" 
pod="openshift-must-gather-vfphw/must-gather-kzsbl" podUID="96ac2dba-808d-4839-9f27-9c77b6d1f97d" Feb 18 20:38:52 crc kubenswrapper[4942]: E0218 20:38:52.096264 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-vfphw/must-gather-kzsbl" podUID="96ac2dba-808d-4839-9f27-9c77b6d1f97d" Feb 18 20:38:54 crc kubenswrapper[4942]: E0218 20:38:54.039158 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:38:56 crc kubenswrapper[4942]: E0218 20:38:56.036999 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:38:58 crc kubenswrapper[4942]: E0218 20:38:58.038452 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:39:00 crc kubenswrapper[4942]: E0218 20:39:00.039508 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:39:02 crc kubenswrapper[4942]: E0218 20:39:02.039570 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:39:03 crc kubenswrapper[4942]: E0218 20:39:03.512189 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/openstack-k8s-operators/openstack-must-gather:latest" Feb 18 20:39:03 crc kubenswrapper[4942]: E0218 20:39:03.512343 4942 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 18 20:39:03 crc kubenswrapper[4942]: container &Container{Name:gather,Image:quay.io/openstack-k8s-operators/openstack-must-gather:latest,Command:[/bin/bash -c if command -v setsid >/dev/null 2>&1 && command -v ps >/dev/null 2>&1 && command -v pkill >/dev/null 2>&1; then Feb 18 20:39:03 crc kubenswrapper[4942]: HAVE_SESSION_TOOLS=true Feb 18 20:39:03 crc kubenswrapper[4942]: else Feb 18 20:39:03 crc kubenswrapper[4942]: HAVE_SESSION_TOOLS=false Feb 18 20:39:03 crc kubenswrapper[4942]: fi Feb 18 20:39:03 crc kubenswrapper[4942]: Feb 18 20:39:03 crc kubenswrapper[4942]: Feb 18 20:39:03 crc kubenswrapper[4942]: echo "[disk usage checker] Started" Feb 18 20:39:03 crc kubenswrapper[4942]: target_dir="/must-gather" Feb 18 20:39:03 crc kubenswrapper[4942]: usage_percentage_limit="80" Feb 18 20:39:03 crc kubenswrapper[4942]: while true; do Feb 18 20:39:03 crc kubenswrapper[4942]: usage_percentage=$(df -P "$target_dir" | awk 'NR==2 {print 
$5}' | sed 's/%//') Feb 18 20:39:03 crc kubenswrapper[4942]: echo "[disk usage checker] Volume usage percentage: current = ${usage_percentage} ; allowed = ${usage_percentage_limit}" Feb 18 20:39:03 crc kubenswrapper[4942]: if [ "$usage_percentage" -gt "$usage_percentage_limit" ]; then Feb 18 20:39:03 crc kubenswrapper[4942]: echo "[disk usage checker] Disk usage exceeds the volume percentage of ${usage_percentage_limit} for mounted directory, terminating..." Feb 18 20:39:03 crc kubenswrapper[4942]: if [ "$HAVE_SESSION_TOOLS" = "true" ]; then Feb 18 20:39:03 crc kubenswrapper[4942]: ps -o sess --no-headers | sort -u | while read sid; do Feb 18 20:39:03 crc kubenswrapper[4942]: [[ "$sid" -eq "${$}" ]] && continue Feb 18 20:39:03 crc kubenswrapper[4942]: pkill --signal SIGKILL --session "$sid" Feb 18 20:39:03 crc kubenswrapper[4942]: done Feb 18 20:39:03 crc kubenswrapper[4942]: else Feb 18 20:39:03 crc kubenswrapper[4942]: kill 0 Feb 18 20:39:03 crc kubenswrapper[4942]: fi Feb 18 20:39:03 crc kubenswrapper[4942]: exit 1 Feb 18 20:39:03 crc kubenswrapper[4942]: fi Feb 18 20:39:03 crc kubenswrapper[4942]: sleep 5 Feb 18 20:39:03 crc kubenswrapper[4942]: done & if [ "$HAVE_SESSION_TOOLS" = "true" ]; then Feb 18 20:39:03 crc kubenswrapper[4942]: setsid -w bash <<-MUSTGATHER_EOF Feb 18 20:39:03 crc kubenswrapper[4942]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather Feb 18 20:39:03 crc kubenswrapper[4942]: MUSTGATHER_EOF Feb 18 20:39:03 crc kubenswrapper[4942]: else Feb 18 20:39:03 crc kubenswrapper[4942]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all OMC=False SOS_DECOMPRESS=0 gather Feb 18 20:39:03 crc kubenswrapper[4942]: fi; sync && echo 'Caches written to 
disk'],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:must-gather-output,ReadOnly:false,MountPath:/must-gather,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x25bk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod must-gather-kzsbl_openshift-must-gather-vfphw(96ac2dba-808d-4839-9f27-9c77b6d1f97d): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error Feb 18 20:39:03 crc kubenswrapper[4942]: > logger="UnhandledError" Feb 18 20:39:03 crc kubenswrapper[4942]: E0218 20:39:03.514499 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" 
pod="openshift-must-gather-vfphw/must-gather-kzsbl" podUID="96ac2dba-808d-4839-9f27-9c77b6d1f97d" Feb 18 20:39:06 crc kubenswrapper[4942]: I0218 20:39:06.327589 4942 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vfphw/must-gather-kzsbl"] Feb 18 20:39:06 crc kubenswrapper[4942]: I0218 20:39:06.339049 4942 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vfphw/must-gather-kzsbl"] Feb 18 20:39:07 crc kubenswrapper[4942]: E0218 20:39:07.038294 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:39:07 crc kubenswrapper[4942]: I0218 20:39:07.100295 4942 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vfphw/must-gather-kzsbl" Feb 18 20:39:07 crc kubenswrapper[4942]: I0218 20:39:07.150320 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x25bk\" (UniqueName: \"kubernetes.io/projected/96ac2dba-808d-4839-9f27-9c77b6d1f97d-kube-api-access-x25bk\") pod \"96ac2dba-808d-4839-9f27-9c77b6d1f97d\" (UID: \"96ac2dba-808d-4839-9f27-9c77b6d1f97d\") " Feb 18 20:39:07 crc kubenswrapper[4942]: I0218 20:39:07.150403 4942 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/96ac2dba-808d-4839-9f27-9c77b6d1f97d-must-gather-output\") pod \"96ac2dba-808d-4839-9f27-9c77b6d1f97d\" (UID: \"96ac2dba-808d-4839-9f27-9c77b6d1f97d\") " Feb 18 20:39:07 crc kubenswrapper[4942]: I0218 20:39:07.151893 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96ac2dba-808d-4839-9f27-9c77b6d1f97d-must-gather-output" (OuterVolumeSpecName: 
"must-gather-output") pod "96ac2dba-808d-4839-9f27-9c77b6d1f97d" (UID: "96ac2dba-808d-4839-9f27-9c77b6d1f97d"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 20:39:07 crc kubenswrapper[4942]: I0218 20:39:07.188700 4942 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96ac2dba-808d-4839-9f27-9c77b6d1f97d-kube-api-access-x25bk" (OuterVolumeSpecName: "kube-api-access-x25bk") pod "96ac2dba-808d-4839-9f27-9c77b6d1f97d" (UID: "96ac2dba-808d-4839-9f27-9c77b6d1f97d"). InnerVolumeSpecName "kube-api-access-x25bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 20:39:07 crc kubenswrapper[4942]: I0218 20:39:07.253234 4942 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x25bk\" (UniqueName: \"kubernetes.io/projected/96ac2dba-808d-4839-9f27-9c77b6d1f97d-kube-api-access-x25bk\") on node \"crc\" DevicePath \"\"" Feb 18 20:39:07 crc kubenswrapper[4942]: I0218 20:39:07.253270 4942 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/96ac2dba-808d-4839-9f27-9c77b6d1f97d-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 18 20:39:07 crc kubenswrapper[4942]: I0218 20:39:07.281904 4942 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vfphw/must-gather-kzsbl" Feb 18 20:39:09 crc kubenswrapper[4942]: E0218 20:39:09.040218 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:39:09 crc kubenswrapper[4942]: I0218 20:39:09.050112 4942 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96ac2dba-808d-4839-9f27-9c77b6d1f97d" path="/var/lib/kubelet/pods/96ac2dba-808d-4839-9f27-9c77b6d1f97d/volumes" Feb 18 20:39:11 crc kubenswrapper[4942]: E0218 20:39:11.062200 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:39:14 crc kubenswrapper[4942]: E0218 20:39:14.040918 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:39:15 crc kubenswrapper[4942]: E0218 20:39:15.039887 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:39:20 crc kubenswrapper[4942]: E0218 20:39:20.038978 4942 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:39:21 crc kubenswrapper[4942]: E0218 20:39:21.052976 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:39:24 crc kubenswrapper[4942]: E0218 20:39:24.063888 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:39:26 crc kubenswrapper[4942]: E0218 20:39:26.038628 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:39:29 crc kubenswrapper[4942]: E0218 20:39:29.039589 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:39:32 crc kubenswrapper[4942]: E0218 20:39:32.038878 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:39:34 crc kubenswrapper[4942]: E0218 20:39:34.038870 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:39:37 crc kubenswrapper[4942]: I0218 20:39:37.037969 4942 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 20:39:37 crc kubenswrapper[4942]: E0218 20:39:37.716513 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="quay.io/quay/busybox:latest" Feb 18 20:39:37 crc kubenswrapper[4942]: E0218 20:39:37.716654 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:test-operator-logs-container,Image:quay.io/quay/busybox,Command:[sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs-volume-0,ReadOnly:false,MountPath:/mnt/logs-tempest-tests-tempest-step-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lp2tv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-logs-pod-tempest-tempest-tests-tempest_openstack(fa910027-8bd8-4779-9dc5-9071534fa252): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:39:37 crc kubenswrapper[4942]: E0218 20:39:37.718219 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:39:41 crc kubenswrapper[4942]: E0218 20:39:41.056681 
4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:39:42 crc kubenswrapper[4942]: E0218 20:39:42.039877 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:39:44 crc kubenswrapper[4942]: E0218 20:39:44.038712 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:39:47 crc kubenswrapper[4942]: E0218 20:39:47.037813 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:39:52 crc kubenswrapper[4942]: E0218 20:39:52.038277 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:39:53 crc kubenswrapper[4942]: E0218 20:39:53.038383 4942 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:39:54 crc kubenswrapper[4942]: E0218 20:39:54.039044 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:39:59 crc kubenswrapper[4942]: E0218 20:39:59.038938 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:39:59 crc kubenswrapper[4942]: E0218 20:39:59.048388 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 18 20:39:59 crc kubenswrapper[4942]: E0218 20:39:59.048541 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztqdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-bpf4n_openshift-marketplace(8277de0c-d81c-4d35-a68a-97ca7a1edd6b): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:39:59 crc kubenswrapper[4942]: E0218 20:39:59.049820 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-operators-bpf4n" 
podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:40:05 crc kubenswrapper[4942]: E0218 20:40:05.039549 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:40:06 crc kubenswrapper[4942]: E0218 20:40:06.038101 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:40:09 crc kubenswrapper[4942]: E0218 20:40:09.039190 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:40:13 crc kubenswrapper[4942]: E0218 20:40:13.040305 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:40:14 crc kubenswrapper[4942]: E0218 20:40:14.706338 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 18 20:40:14 crc 
kubenswrapper[4942]: E0218 20:40:14.706877 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cssts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-scbxm_openshift-marketplace(6b509214-59a6-4d42-9b1b-a0252c545c1d): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError" Feb 18 20:40:14 crc kubenswrapper[4942]: E0218 
20:40:14.708060 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:40:19 crc kubenswrapper[4942]: E0218 20:40:19.039122 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:40:20 crc kubenswrapper[4942]: E0218 20:40:20.038705 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:40:21 crc kubenswrapper[4942]: E0218 20:40:21.053864 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:40:28 crc kubenswrapper[4942]: E0218 20:40:28.039567 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:40:28 crc 
kubenswrapper[4942]: E0218 20:40:28.039966 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:40:31 crc kubenswrapper[4942]: E0218 20:40:31.058303 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:40:32 crc kubenswrapper[4942]: E0218 20:40:32.046423 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:40:36 crc kubenswrapper[4942]: E0218 20:40:36.041061 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:40:39 crc kubenswrapper[4942]: E0218 20:40:39.038804 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:40:40 crc kubenswrapper[4942]: E0218 20:40:40.039698 4942 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:40:45 crc kubenswrapper[4942]: E0218 20:40:45.040318 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:40:46 crc kubenswrapper[4942]: E0218 20:40:46.039999 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:40:48 crc kubenswrapper[4942]: E0218 20:40:48.039230 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c" Feb 18 20:40:52 crc kubenswrapper[4942]: E0218 20:40:52.038635 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b" Feb 18 20:40:52 crc kubenswrapper[4942]: E0218 20:40:52.038791 4942 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d" Feb 18 20:40:53 crc kubenswrapper[4942]: I0218 20:40:53.741703 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 20:40:53 crc kubenswrapper[4942]: I0218 20:40:53.742452 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 20:40:58 crc kubenswrapper[4942]: E0218 20:40:58.040174 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938" Feb 18 20:41:00 crc kubenswrapper[4942]: E0218 20:41:00.040024 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252" Feb 18 20:41:00 crc kubenswrapper[4942]: E0218 20:41:00.040054 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c"
Feb 18 20:41:05 crc kubenswrapper[4942]: E0218 20:41:05.039036 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b"
Feb 18 20:41:05 crc kubenswrapper[4942]: E0218 20:41:05.039718 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d"
Feb 18 20:41:11 crc kubenswrapper[4942]: E0218 20:41:11.055930 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938"
Feb 18 20:41:12 crc kubenswrapper[4942]: E0218 20:41:12.038644 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c"
Feb 18 20:41:14 crc kubenswrapper[4942]: E0218 20:41:14.038481 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252"
Feb 18 20:41:18 crc kubenswrapper[4942]: E0218 20:41:18.040044 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d"
Feb 18 20:41:19 crc kubenswrapper[4942]: E0218 20:41:19.037450 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b"
Feb 18 20:41:23 crc kubenswrapper[4942]: E0218 20:41:23.043043 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938"
Feb 18 20:41:23 crc kubenswrapper[4942]: I0218 20:41:23.740800 4942 patch_prober.go:28] interesting pod/machine-config-daemon-wqxh4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 20:41:23 crc kubenswrapper[4942]: I0218 20:41:23.741131 4942 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wqxh4" podUID="28921539-823a-4439-a230-3b5aed7085cc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 20:41:25 crc kubenswrapper[4942]: E0218 20:41:25.800494 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Feb 18 20:41:25 crc kubenswrapper[4942]: E0218 20:41:25.801088 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8skx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fssdt_openshift-marketplace(a9fb128b-71df-4bd4-8e7c-6494714c5a0c): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:41:25 crc kubenswrapper[4942]: E0218 20:41:25.802361 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c"
Feb 18 20:41:27 crc kubenswrapper[4942]: E0218 20:41:27.039635 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252"
Feb 18 20:41:29 crc kubenswrapper[4942]: E0218 20:41:29.038704 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d"
Feb 18 20:41:33 crc kubenswrapper[4942]: E0218 20:41:33.039179 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bpf4n" podUID="8277de0c-d81c-4d35-a68a-97ca7a1edd6b"
Feb 18 20:41:36 crc kubenswrapper[4942]: E0218 20:41:36.887133 4942 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Feb 18 20:41:36 crc kubenswrapper[4942]: E0218 20:41:36.887867 4942 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqcsp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8lc5d_openshift-marketplace(618efece-b48e-4e8d-baef-15eb25017938): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error" logger="UnhandledError"
Feb 18 20:41:36 crc kubenswrapper[4942]: E0218 20:41:36.889136 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-marketplace/certified-operators-8lc5d" podUID="618efece-b48e-4e8d-baef-15eb25017938"
Feb 18 20:41:39 crc kubenswrapper[4942]: E0218 20:41:39.038034 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test-operator-logs-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/quay/busybox\\\"\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podUID="fa910027-8bd8-4779-9dc5-9071534fa252"
Feb 18 20:41:39 crc kubenswrapper[4942]: E0218 20:41:39.039320 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fssdt" podUID="a9fb128b-71df-4bd4-8e7c-6494714c5a0c"
Feb 18 20:41:42 crc kubenswrapper[4942]: E0218 20:41:42.041031 4942 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-scbxm" podUID="6b509214-59a6-4d42-9b1b-a0252c545c1d"